Test Report: Docker_Linux_crio 21664

fca5789b7681da792c5737c174f2f0168409bc21:2025-10-17:41948

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.26
35 TestAddons/parallel/Registry 14.82
36 TestAddons/parallel/RegistryCreds 0.4
37 TestAddons/parallel/Ingress 151.96
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.31
41 TestAddons/parallel/CSI 48.38
42 TestAddons/parallel/Headlamp 2.63
43 TestAddons/parallel/CloudSpanner 5.29
44 TestAddons/parallel/LocalPath 15.16
45 TestAddons/parallel/NvidiaDevicePlugin 5.24
46 TestAddons/parallel/Yakd 5.25
47 TestAddons/parallel/AmdGpuDevicePlugin 5.25
98 TestFunctional/parallel/ServiceCmdConnect 603.08
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.04
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.11
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
145 TestFunctional/parallel/ServiceCmd/DeployApp 600.76
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
154 TestFunctional/parallel/ServiceCmd/Format 0.54
155 TestFunctional/parallel/ServiceCmd/URL 0.55
191 TestJSONOutput/pause/Command 2.34
197 TestJSONOutput/unpause/Command 1.86
278 TestPause/serial/Pause 7.27
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.27
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.12
310 TestStartStop/group/old-k8s-version/serial/Pause 5.73
316 TestStartStop/group/no-preload/serial/Pause 6.46
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.58
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.18
335 TestStartStop/group/newest-cni/serial/Pause 5.71
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.36
344 TestStartStop/group/embed-certs/serial/Pause 5.95
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.6
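
Most of the addon failures below share one signature: the trailing "addons disable <addon>" step exits with status 11 (MK_ADDON_DISABLE_PAUSED) because minikube's paused-cluster check shells out to "sudo runc list -f json", which fails on this crio node with "open /run/runc: no such file or directory". A minimal reproduction of that check, assuming the addons-808548 profile from this run is still up (profile name and both commands are taken verbatim from the logs below):

	# The crictl half of the check succeeds and lists kube-system containers:
	out/minikube-linux-amd64 -p addons-808548 ssh 'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'
	# The runc half is what fails and aborts every disable:
	out/minikube-linux-amd64 -p addons-808548 ssh 'sudo runc list -f json'
	# expected: level=error msg="open /run/runc: no such file or directory"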
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable volcano --alsologtostderr -v=1: exit status 11 (254.510322ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:28:04.452079  148645 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:28:04.452365  148645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:04.452374  148645 out.go:374] Setting ErrFile to fd 2...
	I1017 19:28:04.452379  148645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:04.452615  148645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:28:04.452933  148645 mustload.go:65] Loading cluster: addons-808548
	I1017 19:28:04.453326  148645 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:04.453343  148645 addons.go:606] checking whether the cluster is paused
	I1017 19:28:04.453426  148645 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:04.453439  148645 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:28:04.453886  148645 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:28:04.473642  148645 ssh_runner.go:195] Run: systemctl --version
	I1017 19:28:04.473707  148645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:28:04.495023  148645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:28:04.591690  148645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:28:04.591812  148645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:28:04.624467  148645 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:28:04.624493  148645 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:28:04.624498  148645 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:28:04.624502  148645 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:28:04.624505  148645 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:28:04.624509  148645 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:28:04.624512  148645 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:28:04.624514  148645 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:28:04.624516  148645 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:28:04.624521  148645 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:28:04.624524  148645 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:28:04.624526  148645 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:28:04.624529  148645 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:28:04.624532  148645 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:28:04.624535  148645 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:28:04.624539  148645 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:28:04.624541  148645 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:28:04.624545  148645 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:28:04.624548  148645 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:28:04.624550  148645 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:28:04.624553  148645 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:28:04.624555  148645 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:28:04.624557  148645 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:28:04.624560  148645 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:28:04.624562  148645 cri.go:89] found id: ""
	I1017 19:28:04.624601  148645 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:28:04.639914  148645 out.go:203] 
	W1017 19:28:04.641900  148645 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:28:04.641927  148645 out.go:285] * 
	W1017 19:28:04.645411  148645 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:28:04.647421  148645 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
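
The runc failure above is consistent with crio driving these containers through an OCI runtime whose state lives outside runc's default root /run/runc (crun, for example, keeps state under /run/crun; which runtime this kicbase image actually uses is an assumption, not something the log confirms). A sketch for checking on the node:

	# Where does crio point its runtime? (assumes crio config lives under /etc/crio)
	out/minikube-linux-amd64 -p addons-808548 ssh 'sudo grep -R "default_runtime\|runtime_root\|runtime_path" /etc/crio/'
	# Which runtime state directories actually exist?
	out/minikube-linux-amd64 -p addons-808548 ssh 'ls -d /run/runc /run/crun 2>/dev/null'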

TestAddons/parallel/Registry (14.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.451654ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-ns7g9" [eacf9d9f-262f-4bd2-b0a0-f13212de3b0d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003550631s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-5gbvf" [0f8d0ee8-125b-4765-824e-19053a0dcfe6] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002936469s
addons_test.go:392: (dbg) Run:  kubectl --context addons-808548 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-808548 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-808548 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.366658341s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 ip
2025/10/17 19:28:28 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable registry --alsologtostderr -v=1: exit status 11 (243.145818ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:28:28.174724  150580 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:28:28.175036  150580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:28.175046  150580 out.go:374] Setting ErrFile to fd 2...
	I1017 19:28:28.175051  150580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:28.175250  150580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:28:28.175501  150580 mustload.go:65] Loading cluster: addons-808548
	I1017 19:28:28.175925  150580 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:28.175945  150580 addons.go:606] checking whether the cluster is paused
	I1017 19:28:28.176035  150580 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:28.176047  150580 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:28:28.176417  150580 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:28:28.197383  150580 ssh_runner.go:195] Run: systemctl --version
	I1017 19:28:28.197439  150580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:28:28.216368  150580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:28:28.312723  150580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:28:28.312841  150580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:28:28.344130  150580 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:28:28.344155  150580 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:28:28.344159  150580 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:28:28.344162  150580 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:28:28.344165  150580 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:28:28.344171  150580 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:28:28.344174  150580 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:28:28.344177  150580 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:28:28.344179  150580 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:28:28.344192  150580 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:28:28.344196  150580 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:28:28.344200  150580 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:28:28.344203  150580 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:28:28.344209  150580 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:28:28.344218  150580 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:28:28.344224  150580 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:28:28.344231  150580 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:28:28.344237  150580 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:28:28.344240  150580 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:28:28.344244  150580 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:28:28.344248  150580 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:28:28.344252  150580 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:28:28.344264  150580 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:28:28.344270  150580 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:28:28.344272  150580 cri.go:89] found id: ""
	I1017 19:28:28.344315  150580 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:28:28.359631  150580 out.go:203] 
	W1017 19:28:28.360827  150580 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:28:28.360847  150580 out.go:285] * 
	W1017 19:28:28.364601  150580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:28:28.366065  150580 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.82s)
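
The registry itself was healthy: both pods became Ready, the in-cluster wget returned, and the host-side GET to 192.168.49.2:5000 succeeded; only the disable step tripped the paused-check failure described above. A hypothetical direct probe of the registry API, using the standard /v2/ ping endpoint (the test itself only fetched the root URL):

	# 200 means the registry is serving; IP and port are from the test output above.
	curl -sS -o /dev/null -w '%{http_code}\n' http://192.168.49.2:5000/v2/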

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.426426ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-808548
addons_test.go:332: (dbg) Run:  kubectl --context addons-808548 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (239.807854ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:28:37.426585  151971 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:28:37.426901  151971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:37.426913  151971 out.go:374] Setting ErrFile to fd 2...
	I1017 19:28:37.426920  151971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:37.427171  151971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:28:37.427473  151971 mustload.go:65] Loading cluster: addons-808548
	I1017 19:28:37.427886  151971 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:37.427907  151971 addons.go:606] checking whether the cluster is paused
	I1017 19:28:37.428021  151971 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:37.428038  151971 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:28:37.428440  151971 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:28:37.447111  151971 ssh_runner.go:195] Run: systemctl --version
	I1017 19:28:37.447183  151971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:28:37.466464  151971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:28:37.562432  151971 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:28:37.562527  151971 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:28:37.592616  151971 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:28:37.592638  151971 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:28:37.592642  151971 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:28:37.592645  151971 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:28:37.592648  151971 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:28:37.592651  151971 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:28:37.592654  151971 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:28:37.592656  151971 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:28:37.592659  151971 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:28:37.592664  151971 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:28:37.592667  151971 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:28:37.592669  151971 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:28:37.592672  151971 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:28:37.592674  151971 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:28:37.592676  151971 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:28:37.592683  151971 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:28:37.592685  151971 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:28:37.592689  151971 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:28:37.592691  151971 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:28:37.592694  151971 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:28:37.592698  151971 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:28:37.592703  151971 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:28:37.592707  151971 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:28:37.592710  151971 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:28:37.592714  151971 cri.go:89] found id: ""
	I1017 19:28:37.592782  151971 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:28:37.607224  151971 out.go:203] 
	W1017 19:28:37.608942  151971 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:28:37.608968  151971 out.go:285] * 
	W1017 19:28:37.612075  151971 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:28:37.613997  151971 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)
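
As with Volcano and Registry, the disable dies in roughly a quarter of a second, i.e. inside the pre-flight paused check and before registry-creds is touched. A sketch for collecting the artifacts the error box asks for (the glob below matches the /tmp filename pattern printed above):

	out/minikube-linux-amd64 -p addons-808548 logs --file=logs.txt
	ls /tmp/minikube_addons_*_0.log   # per-command logs referenced in the error boxes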

TestAddons/parallel/Ingress (151.96s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-808548 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-808548 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-808548 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2f32e7eb-cfb1-437a-bb9b-b1ca4410297a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2f32e7eb-cfb1-437a-bb9b-b1ca4410297a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003399758s
I1017 19:28:40.833186  139217 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.148908179s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-808548 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
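
The "status 28" in the ssh error above is the remote curl's exit code, and 28 is curl's CURLE_OPERATION_TIMEDOUT: the request timed out rather than receiving an HTTP error (minikube ssh itself exits 1 and only reports the remote status in its message). A bounded re-probe, sketched; the 10-second cap is an arbitrary choice, not part of the test:

	out/minikube-linux-amd64 -p addons-808548 ssh 'curl -sS -m 10 -H "Host: nginx.example.com" http://127.0.0.1/'
	# another "Process exited with status 28" would confirm a timeout, not an HTTP failure
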
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-808548
helpers_test.go:243: (dbg) docker inspect addons-808548:

-- stdout --
	[
	    {
	        "Id": "8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89",
	        "Created": "2025-10-17T19:25:58.610025851Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 141183,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:25:58.653002983Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89/hostname",
	        "HostsPath": "/var/lib/docker/containers/8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89/hosts",
	        "LogPath": "/var/lib/docker/containers/8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89/8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89-json.log",
	        "Name": "/addons-808548",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-808548:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-808548",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89",
	                "LowerDir": "/var/lib/docker/overlay2/0bbf6542911523bcf60aa175ebdc26146bf7f2dd177486aca0eb2c801bf3f352-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0bbf6542911523bcf60aa175ebdc26146bf7f2dd177486aca0eb2c801bf3f352/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0bbf6542911523bcf60aa175ebdc26146bf7f2dd177486aca0eb2c801bf3f352/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0bbf6542911523bcf60aa175ebdc26146bf7f2dd177486aca0eb2c801bf3f352/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-808548",
	                "Source": "/var/lib/docker/volumes/addons-808548/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-808548",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-808548",
	                "name.minikube.sigs.k8s.io": "addons-808548",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4d88c341d13426cf6f42955cabbd4732e0f1d8e9c3b1f9f3690ab228f8efa3a5",
	            "SandboxKey": "/var/run/docker/netns/4d88c341d134",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-808548": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:cd:7d:bf:e8:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cd6e83943d7923cce77d4b5c86646887375a6d303d2552d2f1e760e4a6261218",
	                    "EndpointID": "02f6ce3169aa5061bf53b42b51b81b8c960732d144e806904f533987c937f989",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-808548",
	                        "8ba8a9320a55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-808548 -n addons-808548
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-808548 logs -n 25: (1.272572254s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	│ start   │ --download-only -p binary-mirror-524976 --alsologtostderr --binary-mirror http://127.0.0.1:38925 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-524976 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │                     │
	│ delete  │ -p binary-mirror-524976                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-524976 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:25 UTC │
	│ addons  │ disable dashboard -p addons-808548                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │                     │
	│ addons  │ enable dashboard -p addons-808548                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │                     │
	│ start   │ -p addons-808548 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:28 UTC │
	│ addons  │ addons-808548 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ addons  │ addons-808548 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ addons  │ enable headlamp -p addons-808548 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ addons  │ addons-808548 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ addons  │ addons-808548 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ addons  │ addons-808548 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ addons  │ addons-808548 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ addons  │ addons-808548 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ ip      │ addons-808548 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │ 17 Oct 25 19:28 UTC │
	│ addons  │ addons-808548 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ addons  │ addons-808548 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ addons  │ addons-808548 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-808548                                                                                                                                                                                                                                                                                                                                                                                           │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │ 17 Oct 25 19:28 UTC │
	│ addons  │ addons-808548 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ ssh     │ addons-808548 ssh cat /opt/local-path-provisioner/pvc-c3b2c4b5-817c-4b9b-a34b-00566c5e90d3_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │ 17 Oct 25 19:28 UTC │
	│ addons  │ addons-808548 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ ssh     │ addons-808548 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │                     │
	│ addons  │ addons-808548 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:29 UTC │                     │
	│ addons  │ addons-808548 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:29 UTC │                     │
	│ ip      │ addons-808548 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-808548        │ jenkins │ v1.37.0 │ 17 Oct 25 19:30 UTC │ 17 Oct 25 19:30 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:25:33
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:25:33.702059  140531 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:25:33.702302  140531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:25:33.702310  140531 out.go:374] Setting ErrFile to fd 2...
	I1017 19:25:33.702314  140531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:25:33.702542  140531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:25:33.703140  140531 out.go:368] Setting JSON to false
	I1017 19:25:33.704031  140531 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4082,"bootTime":1760725052,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:25:33.704133  140531 start.go:141] virtualization: kvm guest
	I1017 19:25:33.706399  140531 out.go:179] * [addons-808548] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:25:33.708105  140531 notify.go:220] Checking for updates...
	I1017 19:25:33.708153  140531 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 19:25:33.709762  140531 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:25:33.711490  140531 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 19:25:33.713131  140531 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 19:25:33.714643  140531 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:25:33.716093  140531 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:25:33.717999  140531 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:25:33.742798  140531 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:25:33.742906  140531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:25:33.801327  140531 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-17 19:25:33.791638879 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:25:33.801470  140531 docker.go:318] overlay module found
	I1017 19:25:33.803563  140531 out.go:179] * Using the docker driver based on user configuration
	I1017 19:25:33.805146  140531 start.go:305] selected driver: docker
	I1017 19:25:33.805166  140531 start.go:925] validating driver "docker" against <nil>
	I1017 19:25:33.805180  140531 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:25:33.805821  140531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:25:33.867277  140531 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-17 19:25:33.857612227 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:25:33.867449  140531 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:25:33.867724  140531 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:25:33.869810  140531 out.go:179] * Using Docker driver with root privileges
	I1017 19:25:33.871462  140531 cni.go:84] Creating CNI manager for ""
	I1017 19:25:33.871529  140531 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:25:33.871540  140531 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 19:25:33.871614  140531 start.go:349] cluster config:
	{Name:addons-808548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-808548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:25:33.873223  140531 out.go:179] * Starting "addons-808548" primary control-plane node in "addons-808548" cluster
	I1017 19:25:33.874801  140531 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:25:33.876158  140531 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:25:33.877358  140531 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:25:33.877405  140531 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:25:33.877417  140531 cache.go:58] Caching tarball of preloaded images
	I1017 19:25:33.877463  140531 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:25:33.877510  140531 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:25:33.877522  140531 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:25:33.877870  140531 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/config.json ...
	I1017 19:25:33.877899  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/config.json: {Name:mkaca1513894a0aae948fe803cc8ba28d52d6cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:25:33.894234  140531 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 19:25:33.894361  140531 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1017 19:25:33.894382  140531 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1017 19:25:33.894390  140531 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1017 19:25:33.894399  140531 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1017 19:25:33.894404  140531 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1017 19:25:46.520011  140531 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1017 19:25:46.520057  140531 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:25:46.520091  140531 start.go:360] acquireMachinesLock for addons-808548: {Name:mk65579f0f6a86b497afc62e2daab2619360d7ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:25:46.520203  140531 start.go:364] duration metric: took 90.409µs to acquireMachinesLock for "addons-808548"
	I1017 19:25:46.520228  140531 start.go:93] Provisioning new machine with config: &{Name:addons-808548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-808548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:25:46.520317  140531 start.go:125] createHost starting for "" (driver="docker")
	I1017 19:25:46.522441  140531 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1017 19:25:46.522692  140531 start.go:159] libmachine.API.Create for "addons-808548" (driver="docker")
	I1017 19:25:46.522728  140531 client.go:168] LocalClient.Create starting
	I1017 19:25:46.522886  140531 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem
	I1017 19:25:46.629133  140531 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem
	I1017 19:25:46.958127  140531 cli_runner.go:164] Run: docker network inspect addons-808548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 19:25:46.975826  140531 cli_runner.go:211] docker network inspect addons-808548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 19:25:46.975916  140531 network_create.go:284] running [docker network inspect addons-808548] to gather additional debugging logs...
	I1017 19:25:46.975946  140531 cli_runner.go:164] Run: docker network inspect addons-808548
	W1017 19:25:46.993713  140531 cli_runner.go:211] docker network inspect addons-808548 returned with exit code 1
	I1017 19:25:46.993759  140531 network_create.go:287] error running [docker network inspect addons-808548]: docker network inspect addons-808548: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-808548 not found
	I1017 19:25:46.993777  140531 network_create.go:289] output of [docker network inspect addons-808548]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-808548 not found
	
	** /stderr **
	I1017 19:25:46.993905  140531 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:25:47.012503  140531 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00163ab20}
	I1017 19:25:47.012557  140531 network_create.go:124] attempt to create docker network addons-808548 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1017 19:25:47.012629  140531 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-808548 addons-808548
	I1017 19:25:47.072107  140531 network_create.go:108] docker network addons-808548 192.168.49.0/24 created
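	[Editor's note] The network-create step above can be cross-checked from the host. A minimal sketch, assuming the addons-808548 network still exists; the --format template is illustrative, not something the test harness runs:

	# Print the subnet and gateway minikube assigned to the cluster network
	docker network inspect addons-808548 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	# Expected, per the log above: subnet=192.168.49.0/24 gateway=192.168.49.1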
	I1017 19:25:47.072144  140531 kic.go:121] calculated static IP "192.168.49.2" for the "addons-808548" container
	I1017 19:25:47.072224  140531 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 19:25:47.089189  140531 cli_runner.go:164] Run: docker volume create addons-808548 --label name.minikube.sigs.k8s.io=addons-808548 --label created_by.minikube.sigs.k8s.io=true
	I1017 19:25:47.108479  140531 oci.go:103] Successfully created a docker volume addons-808548
	I1017 19:25:47.108600  140531 cli_runner.go:164] Run: docker run --rm --name addons-808548-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-808548 --entrypoint /usr/bin/test -v addons-808548:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 19:25:54.075981  140531 cli_runner.go:217] Completed: docker run --rm --name addons-808548-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-808548 --entrypoint /usr/bin/test -v addons-808548:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.967337197s)
	I1017 19:25:54.076027  140531 oci.go:107] Successfully prepared a docker volume addons-808548
	I1017 19:25:54.076071  140531 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:25:54.076102  140531 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 19:25:54.076170  140531 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-808548:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 19:25:58.534137  140531 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-808548:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.457922089s)
	I1017 19:25:58.534168  140531 kic.go:203] duration metric: took 4.458063007s to extract preloaded images to volume ...
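	[Editor's note] The Completed: entries above show the preload pattern minikube uses here: a throwaway container whose only job is to untar the cached image tarball into the named volume that later backs /var in the node container. A generic sketch of the same pattern, with illustrative names (demo-vol, the tarball path) standing in for minikube's:

	# Unpack a host-side tarball into a named Docker volume via a disposable container
	docker volume create demo-vol
	docker run --rm \
	  -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
	  -v demo-vol:/extractDir \
	  --entrypoint /usr/bin/tar \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757 \
	  -I lz4 -xf /preloaded.tar -C /extractDir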
	W1017 19:25:58.534446  140531 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 19:25:58.534523  140531 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 19:25:58.534583  140531 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 19:25:58.592371  140531 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-808548 --name addons-808548 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-808548 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-808548 --network addons-808548 --ip 192.168.49.2 --volume addons-808548:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
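	[Editor's note] Each --publish=127.0.0.1:: flag in the docker run above binds a container port to an ephemeral loopback port on the host; the SSH port 32889 that appears later in the log is one such binding. The mappings can be recovered with docker port (a quick host-side check, not part of the test flow):

	# Ephemeral host ports Docker assigned to the published container ports
	docker port addons-808548 22/tcp     # SSH, e.g. 127.0.0.1:32889 as in the log
	docker port addons-808548 8443/tcp   # Kubernetes API server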
	I1017 19:25:58.890300  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Running}}
	I1017 19:25:58.909243  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:25:58.928180  140531 cli_runner.go:164] Run: docker exec addons-808548 stat /var/lib/dpkg/alternatives/iptables
	I1017 19:25:58.978313  140531 oci.go:144] the created container "addons-808548" has a running status.
	I1017 19:25:58.978351  140531 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa...
	I1017 19:25:59.133207  140531 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 19:25:59.159672  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:25:59.185144  140531 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 19:25:59.185173  140531 kic_runner.go:114] Args: [docker exec --privileged addons-808548 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 19:25:59.243295  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:25:59.264667  140531 machine.go:93] provisionDockerMachine start ...
	I1017 19:25:59.264799  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:25:59.287032  140531 main.go:141] libmachine: Using SSH client type: native
	I1017 19:25:59.287374  140531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32889 <nil> <nil>}
	I1017 19:25:59.287396  140531 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:25:59.426879  140531 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-808548
	
	I1017 19:25:59.426911  140531 ubuntu.go:182] provisioning hostname "addons-808548"
	I1017 19:25:59.426976  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:25:59.446144  140531 main.go:141] libmachine: Using SSH client type: native
	I1017 19:25:59.446413  140531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32889 <nil> <nil>}
	I1017 19:25:59.446436  140531 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-808548 && echo "addons-808548" | sudo tee /etc/hostname
	I1017 19:25:59.594579  140531 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-808548
	
	I1017 19:25:59.594667  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:25:59.614344  140531 main.go:141] libmachine: Using SSH client type: native
	I1017 19:25:59.614626  140531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32889 <nil> <nil>}
	I1017 19:25:59.614651  140531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-808548' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-808548/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-808548' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:25:59.750778  140531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:25:59.750819  140531 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 19:25:59.750874  140531 ubuntu.go:190] setting up certificates
	I1017 19:25:59.750890  140531 provision.go:84] configureAuth start
	I1017 19:25:59.750946  140531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-808548
	I1017 19:25:59.768583  140531 provision.go:143] copyHostCerts
	I1017 19:25:59.768665  140531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 19:25:59.768831  140531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 19:25:59.768907  140531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 19:25:59.768961  140531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.addons-808548 san=[127.0.0.1 192.168.49.2 addons-808548 localhost minikube]
	I1017 19:25:59.872056  140531 provision.go:177] copyRemoteCerts
	I1017 19:25:59.872117  140531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:25:59.872153  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:25:59.890140  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:25:59.988467  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 19:26:00.009604  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:26:00.028169  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:26:00.047530  140531 provision.go:87] duration metric: took 296.620058ms to configureAuth
	I1017 19:26:00.047571  140531 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:26:00.047756  140531 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:26:00.047857  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:00.066462  140531 main.go:141] libmachine: Using SSH client type: native
	I1017 19:26:00.066677  140531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32889 <nil> <nil>}
	I1017 19:26:00.066696  140531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:26:00.319719  140531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:26:00.319764  140531 machine.go:96] duration metric: took 1.055065357s to provisionDockerMachine
	I1017 19:26:00.319779  140531 client.go:171] duration metric: took 13.79704377s to LocalClient.Create
	I1017 19:26:00.319795  140531 start.go:167] duration metric: took 13.797105592s to libmachine.API.Create "addons-808548"
	I1017 19:26:00.319803  140531 start.go:293] postStartSetup for "addons-808548" (driver="docker")
	I1017 19:26:00.319812  140531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:26:00.319863  140531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:26:00.319911  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:00.338666  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:00.438674  140531 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:26:00.442440  140531 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:26:00.442473  140531 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:26:00.442489  140531 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 19:26:00.442562  140531 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 19:26:00.442598  140531 start.go:296] duration metric: took 122.788114ms for postStartSetup
	I1017 19:26:00.443053  140531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-808548
	I1017 19:26:00.461136  140531 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/config.json ...
	I1017 19:26:00.461420  140531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:26:00.461465  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:00.480436  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:00.575236  140531 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:26:00.580154  140531 start.go:128] duration metric: took 14.059814057s to createHost
	I1017 19:26:00.580182  140531 start.go:83] releasing machines lock for "addons-808548", held for 14.059967201s
	I1017 19:26:00.580262  140531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-808548
	I1017 19:26:00.598196  140531 ssh_runner.go:195] Run: cat /version.json
	I1017 19:26:00.598259  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:00.598315  140531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:26:00.598418  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:00.616979  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:00.617607  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:00.765637  140531 ssh_runner.go:195] Run: systemctl --version
	I1017 19:26:00.772338  140531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:26:00.811240  140531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:26:00.816296  140531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:26:00.816375  140531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:26:00.844652  140531 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 19:26:00.844676  140531 start.go:495] detecting cgroup driver to use...
	I1017 19:26:00.844707  140531 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 19:26:00.844786  140531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:26:00.860778  140531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:26:00.874044  140531 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:26:00.874109  140531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:26:00.891423  140531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:26:00.910090  140531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:26:00.990423  140531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:26:01.079181  140531 docker.go:234] disabling docker service ...
	I1017 19:26:01.079259  140531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:26:01.099718  140531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:26:01.113539  140531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:26:01.197576  140531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:26:01.282449  140531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
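	[Editor's note] The systemctl invocations above are the usual stop/disable/mask sequence for taking a socket-activated unit fully out of play. Condensed, and assuming a docker.service/docker.socket pair as shipped in the kicbase image:

	# Stop the running units, prevent socket activation, mask the service outright
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	sudo systemctl is-active --quiet docker || echo "docker is inactive"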
	I1017 19:26:01.295997  140531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:26:01.311384  140531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:26:01.311448  140531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.323160  140531 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 19:26:01.323227  140531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.333122  140531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.342803  140531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.352540  140531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:26:01.361778  140531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.371558  140531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.386774  140531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.396473  140531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:26:01.404758  140531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:26:01.412679  140531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:26:01.492206  140531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:26:01.598856  140531 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:26:01.598932  140531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:26:01.603314  140531 start.go:563] Will wait 60s for crictl version
	I1017 19:26:01.603381  140531 ssh_runner.go:195] Run: which crictl
	I1017 19:26:01.607469  140531 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:26:01.633262  140531 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:26:01.633370  140531 ssh_runner.go:195] Run: crio --version
	I1017 19:26:01.663013  140531 ssh_runner.go:195] Run: crio --version
	I1017 19:26:01.693534  140531 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:26:01.695074  140531 cli_runner.go:164] Run: docker network inspect addons-808548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:26:01.712397  140531 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:26:01.716843  140531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:26:01.728267  140531 kubeadm.go:883] updating cluster {Name:addons-808548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-808548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:26:01.728387  140531 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:26:01.728435  140531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:26:01.761040  140531 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:26:01.761063  140531 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:26:01.761113  140531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:26:01.787916  140531 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:26:01.787941  140531 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:26:01.787949  140531 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 19:26:01.788037  140531 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-808548 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-808548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:26:01.788103  140531 ssh_runner.go:195] Run: crio config
	I1017 19:26:01.835602  140531 cni.go:84] Creating CNI manager for ""
	I1017 19:26:01.835633  140531 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:26:01.835657  140531 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:26:01.835685  140531 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-808548 NodeName:addons-808548 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:26:01.835874  140531 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-808548"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
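	[Editor's note] The rendered kubeadm config above is what minikube later feeds to kubeadm init (it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below). As a standalone sketch; minikube drives kubeadm itself and typically adds --ignore-preflight-errors flags, so this is illustrative rather than the exact invocation:

	# Bootstrapping a control plane from a generated config of this shape
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new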
	
	I1017 19:26:01.835953  140531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:26:01.844400  140531 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:26:01.844471  140531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:26:01.852769  140531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:26:01.865783  140531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:26:01.882589  140531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1017 19:26:01.895872  140531 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1017 19:26:01.899694  140531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:26:01.910422  140531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:26:01.989983  140531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:26:02.015299  140531 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548 for IP: 192.168.49.2
	I1017 19:26:02.015327  140531 certs.go:195] generating shared ca certs ...
	I1017 19:26:02.015354  140531 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.015520  140531 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 19:26:02.193219  140531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt ...
	I1017 19:26:02.193252  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt: {Name:mkfc088070143abbd0f930c07946609512d7ef36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.193436  140531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key ...
	I1017 19:26:02.193448  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key: {Name:mkaa7e58b0af7a6942d2615741dff1bed8e2be43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.193525  140531 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 19:26:02.409464  140531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt ...
	I1017 19:26:02.409499  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt: {Name:mk2e3f8e8d70d69eb6b5b9f14918e8b1168d78ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.409671  140531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key ...
	I1017 19:26:02.409687  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key: {Name:mkcb1da175e68492f6a06b0defa317fba200f634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.409791  140531 certs.go:257] generating profile certs ...
	I1017 19:26:02.409859  140531 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.key
	I1017 19:26:02.409875  140531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt with IP's: []
	I1017 19:26:02.619553  140531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt ...
	I1017 19:26:02.619587  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: {Name:mk665ca13c7fdca90358a51795e776aa2181e3ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.619770  140531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.key ...
	I1017 19:26:02.619782  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.key: {Name:mk92a5b10d31d3914366c137af2c424e55c73bfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.619860  140531 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.key.82446dd2
	I1017 19:26:02.619881  140531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.crt.82446dd2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1017 19:26:02.945779  140531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.crt.82446dd2 ...
	I1017 19:26:02.945820  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.crt.82446dd2: {Name:mk7cd72641b8baf28c795da2bb5867be4971f6d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.946006  140531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.key.82446dd2 ...
	I1017 19:26:02.946019  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.key.82446dd2: {Name:mkb6f19c6773ac83b8e937425fcc7f0a377d682c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.946101  140531 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.crt.82446dd2 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.crt
	I1017 19:26:02.946183  140531 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.key.82446dd2 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.key
	I1017 19:26:02.946259  140531 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.key
	I1017 19:26:02.946277  140531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.crt with IP's: []
	I1017 19:26:03.017201  140531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.crt ...
	I1017 19:26:03.017234  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.crt: {Name:mke571d81ce4e4b4899edc553a51d0cad4d1f265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:03.017398  140531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.key ...
	I1017 19:26:03.017411  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.key: {Name:mk53a17cf441dd5672ed895c266ecfd7051a21f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:03.017580  140531 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:26:03.017618  140531 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 19:26:03.017642  140531 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:26:03.017665  140531 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
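The certs.go steps above build a small PKI from scratch: shared minikubeCA and proxyClientCA roots, then per-profile client, apiserver (signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2), and aggregator certs. A rough sketch of the CA-generation step using Go's crypto/x509; the field values are illustrative assumptions, not minikube's actual choices:

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"time"
    )

    // newCA builds a self-signed CA key pair and returns both halves as PEM.
    func newCA() (certPEM, keyPEM []byte, err error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0), // illustrative lifetime
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	// Template doubles as parent: the certificate signs itself.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		return nil, nil, err
    	}
    	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	return certPEM, keyPEM, nil
    }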
	I1017 19:26:03.018233  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:26:03.037338  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 19:26:03.056438  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:26:03.075692  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:26:03.094642  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 19:26:03.112993  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:26:03.131370  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:26:03.150550  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 19:26:03.169440  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:26:03.190973  140531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:26:03.205611  140531 ssh_runner.go:195] Run: openssl version
	I1017 19:26:03.212027  140531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:26:03.224093  140531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:26:03.228242  140531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:26:03.228355  140531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:26:03.263058  140531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
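The openssl/ln steps above install the CA under its OpenSSL subject hash (here b5213941.0) so TLS stacks that scan /etc/ssl/certs by hash name can find it. The same flow in Go, assuming an openssl binary on PATH and write access to the trust directory:

    package sketch

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCAByHash reproduces the two steps at 19:26:03.228 and 19:26:03.263:
    // ask openssl for the certificate's subject hash, then symlink the cert
    // into the trust directory as "<hash>.0".
    func linkCAByHash(certPath, trustDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(trustDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // replace a stale link, matching "ln -fs" behavior
    	return os.Symlink(certPath, link)
    }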
	I1017 19:26:03.272337  140531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:26:03.276253  140531 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 19:26:03.276304  140531 kubeadm.go:400] StartCluster: {Name:addons-808548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-808548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:26:03.276395  140531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:26:03.276452  140531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:26:03.304781  140531 cri.go:89] found id: ""
	I1017 19:26:03.304870  140531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:26:03.313357  140531 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 19:26:03.321754  140531 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 19:26:03.321846  140531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 19:26:03.329794  140531 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 19:26:03.329831  140531 kubeadm.go:157] found existing configuration files:
	
	I1017 19:26:03.329873  140531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 19:26:03.337716  140531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 19:26:03.337798  140531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 19:26:03.345677  140531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 19:26:03.353811  140531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 19:26:03.353896  140531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 19:26:03.361937  140531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 19:26:03.369889  140531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 19:26:03.369954  140531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 19:26:03.378097  140531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 19:26:03.386480  140531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 19:26:03.386545  140531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
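The four grep/rm pairs above implement stale-config cleanup: each kubeconfig that does not mention https://control-plane.minikube.internal:8443 is removed so kubeadm regenerates it fresh. Condensed into a Go sketch (the real code shells these steps out over SSH):

    package sketch

    import (
    	"os"
    	"strings"
    )

    // pruneStaleConfigs mirrors the grep/rm loop above: keep only kubeconfigs
    // that already point at the expected endpoint. A missing file falls
    // through to the remove, which is then a harmless no-op.
    func pruneStaleConfigs(endpoint string, files []string) {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // already points at the right control plane
    		}
    		os.Remove(f)
    	}
    }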
	I1017 19:26:03.394561  140531 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 19:26:03.453845  140531 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 19:26:03.511106  140531 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 19:26:14.159330  140531 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 19:26:14.159395  140531 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 19:26:14.159507  140531 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 19:26:14.159597  140531 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 19:26:14.159651  140531 kubeadm.go:318] OS: Linux
	I1017 19:26:14.159709  140531 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 19:26:14.159807  140531 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 19:26:14.159881  140531 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 19:26:14.159965  140531 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 19:26:14.160026  140531 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 19:26:14.160101  140531 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 19:26:14.160153  140531 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 19:26:14.160196  140531 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 19:26:14.160305  140531 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 19:26:14.160395  140531 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 19:26:14.160478  140531 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 19:26:14.160584  140531 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 19:26:14.162556  140531 out.go:252]   - Generating certificates and keys ...
	I1017 19:26:14.162628  140531 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 19:26:14.162686  140531 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 19:26:14.162769  140531 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 19:26:14.162827  140531 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 19:26:14.162911  140531 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 19:26:14.162978  140531 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 19:26:14.163078  140531 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 19:26:14.163259  140531 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-808548 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 19:26:14.163338  140531 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 19:26:14.163508  140531 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-808548 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 19:26:14.163603  140531 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 19:26:14.163685  140531 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 19:26:14.163727  140531 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 19:26:14.163830  140531 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 19:26:14.163884  140531 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 19:26:14.163952  140531 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 19:26:14.164009  140531 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 19:26:14.164067  140531 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 19:26:14.164112  140531 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 19:26:14.164176  140531 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 19:26:14.164234  140531 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 19:26:14.165674  140531 out.go:252]   - Booting up control plane ...
	I1017 19:26:14.165786  140531 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:26:14.165853  140531 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:26:14.165908  140531 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:26:14.165993  140531 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:26:14.166095  140531 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 19:26:14.166184  140531 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 19:26:14.166259  140531 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:26:14.166302  140531 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:26:14.166421  140531 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 19:26:14.166522  140531 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 19:26:14.166597  140531 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001758969s
	I1017 19:26:14.166679  140531 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 19:26:14.166772  140531 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1017 19:26:14.166849  140531 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 19:26:14.166941  140531 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 19:26:14.167025  140531 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.307991069s
	I1017 19:26:14.167096  140531 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.770583697s
	I1017 19:26:14.167155  140531 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501807006s
	I1017 19:26:14.167264  140531 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 19:26:14.167402  140531 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 19:26:14.167467  140531 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 19:26:14.167640  140531 kubeadm.go:318] [mark-control-plane] Marking the node addons-808548 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 19:26:14.167754  140531 kubeadm.go:318] [bootstrap-token] Using token: me1c77.otz9569wj37o7b0e
	I1017 19:26:14.169390  140531 out.go:252]   - Configuring RBAC rules ...
	I1017 19:26:14.169514  140531 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 19:26:14.169629  140531 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 19:26:14.169885  140531 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 19:26:14.170005  140531 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 19:26:14.170096  140531 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 19:26:14.170164  140531 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 19:26:14.170272  140531 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 19:26:14.170329  140531 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 19:26:14.170392  140531 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 19:26:14.170402  140531 kubeadm.go:318] 
	I1017 19:26:14.170507  140531 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 19:26:14.170528  140531 kubeadm.go:318] 
	I1017 19:26:14.170630  140531 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 19:26:14.170642  140531 kubeadm.go:318] 
	I1017 19:26:14.170677  140531 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 19:26:14.170778  140531 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 19:26:14.170849  140531 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 19:26:14.170860  140531 kubeadm.go:318] 
	I1017 19:26:14.170932  140531 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 19:26:14.170948  140531 kubeadm.go:318] 
	I1017 19:26:14.171023  140531 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 19:26:14.171039  140531 kubeadm.go:318] 
	I1017 19:26:14.171107  140531 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 19:26:14.171174  140531 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 19:26:14.171234  140531 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 19:26:14.171241  140531 kubeadm.go:318] 
	I1017 19:26:14.171307  140531 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 19:26:14.171373  140531 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 19:26:14.171379  140531 kubeadm.go:318] 
	I1017 19:26:14.171452  140531 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token me1c77.otz9569wj37o7b0e \
	I1017 19:26:14.171569  140531 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 \
	I1017 19:26:14.171590  140531 kubeadm.go:318] 	--control-plane 
	I1017 19:26:14.171596  140531 kubeadm.go:318] 
	I1017 19:26:14.171677  140531 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 19:26:14.171692  140531 kubeadm.go:318] 
	I1017 19:26:14.171805  140531 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token me1c77.otz9569wj37o7b0e \
	I1017 19:26:14.171986  140531 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 
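kubeadm's kubelet-check and control-plane-check phases above are plain health polls against http://127.0.0.1:10248/healthz and the components' livez/healthz ports. A sketch of that wait loop with a stock http.Client; note the https endpoints in the log serve cluster-issued certs, so a real client would also need the cluster CA configured:

    package sketch

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls an endpoint until it answers 200 or the deadline
    // passes, the way the "is healthy after ..." lines above are produced.
    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{Timeout: 2 * time.Second}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy within %s", url, timeout)
    }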
	I1017 19:26:14.172007  140531 cni.go:84] Creating CNI manager for ""
	I1017 19:26:14.172020  140531 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
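A hypothetical reduction of the recommendation logged at cni.go:143, purely to illustrate the shape of the decision; the real selection logic covers many more driver/runtime combinations:

    package sketch

    // chooseCNI: with the docker driver but a non-docker runtime such as
    // crio, minikube recommends kindnet to provide pod networking.
    func chooseCNI(driver, runtime string) string {
    	if driver == "docker" && runtime != "docker" {
    		return "kindnet"
    	}
    	return "" // placeholder; the real logic handles many more cases
    }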
	I1017 19:26:14.174104  140531 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 19:26:14.175857  140531 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 19:26:14.180863  140531 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 19:26:14.180886  140531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 19:26:14.193843  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 19:26:14.410253  140531 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 19:26:14.410330  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:14.410361  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-808548 minikube.k8s.io/updated_at=2025_10_17T19_26_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=addons-808548 minikube.k8s.io/primary=true
	I1017 19:26:14.421885  140531 ops.go:34] apiserver oom_adj: -16
	I1017 19:26:14.495250  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:14.995943  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:15.495970  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:15.995397  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:16.495418  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:16.996296  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:17.495816  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:17.995420  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:18.496090  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:18.561677  140531 kubeadm.go:1113] duration metric: took 4.151412842s to wait for elevateKubeSystemPrivileges
	I1017 19:26:18.561719  140531 kubeadm.go:402] duration metric: took 15.285419539s to StartCluster
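The repeated kubectl get sa default runs above are a poll: kubeadm creates the default ServiceAccount asynchronously, so the elevateKubeSystemPrivileges step retries every ~500ms until it exists. The loop, sketched as a hypothetical helper shelling out to kubectl:

    package sketch

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls for the "default" ServiceAccount until it
    // appears or the timeout elapses.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
    		if cmd.Run() == nil {
    			return nil // service account exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready within %s", timeout)
    }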
	I1017 19:26:18.561930  140531 settings.go:142] acquiring lock: {Name:mka4633fb25e97d0a4c6d64012444d90b7517c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:18.562097  140531 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 19:26:18.562718  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:18.563013  140531 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:26:18.563197  140531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 19:26:18.563196  140531 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1017 19:26:18.563460  140531 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:26:18.563532  140531 addons.go:69] Setting ingress=true in profile "addons-808548"
	I1017 19:26:18.563544  140531 addons.go:69] Setting yakd=true in profile "addons-808548"
	I1017 19:26:18.563554  140531 addons.go:238] Setting addon ingress=true in "addons-808548"
	I1017 19:26:18.563531  140531 addons.go:69] Setting metrics-server=true in profile "addons-808548"
	I1017 19:26:18.563577  140531 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-808548"
	I1017 19:26:18.563584  140531 addons.go:238] Setting addon metrics-server=true in "addons-808548"
	I1017 19:26:18.563601  140531 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-808548"
	I1017 19:26:18.563619  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.563634  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.563801  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.563879  140531 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-808548"
	I1017 19:26:18.563904  140531 addons.go:69] Setting ingress-dns=true in profile "addons-808548"
	I1017 19:26:18.563923  140531 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-808548"
	I1017 19:26:18.563926  140531 addons.go:238] Setting addon ingress-dns=true in "addons-808548"
	I1017 19:26:18.563946  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.563955  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.564206  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.564256  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.564306  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.564389  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.564415  140531 addons.go:69] Setting default-storageclass=true in profile "addons-808548"
	I1017 19:26:18.564456  140531 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-808548"
	I1017 19:26:18.564565  140531 addons.go:69] Setting gcp-auth=true in profile "addons-808548"
	I1017 19:26:18.564791  140531 mustload.go:65] Loading cluster: addons-808548
	I1017 19:26:18.565116  140531 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:26:18.565181  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.565420  140531 addons.go:69] Setting cloud-spanner=true in profile "addons-808548"
	I1017 19:26:18.563568  140531 addons.go:238] Setting addon yakd=true in "addons-808548"
	I1017 19:26:18.565456  140531 addons.go:238] Setting addon cloud-spanner=true in "addons-808548"
	I1017 19:26:18.565521  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.565541  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.565879  140531 out.go:179] * Verifying Kubernetes components...
	I1017 19:26:18.566048  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.565515  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.566446  140531 addons.go:69] Setting storage-provisioner=true in profile "addons-808548"
	I1017 19:26:18.566476  140531 addons.go:238] Setting addon storage-provisioner=true in "addons-808548"
	I1017 19:26:18.566512  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.566619  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.567454  140531 addons.go:69] Setting registry=true in profile "addons-808548"
	I1017 19:26:18.567475  140531 addons.go:238] Setting addon registry=true in "addons-808548"
	I1017 19:26:18.567543  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.567690  140531 addons.go:69] Setting inspektor-gadget=true in profile "addons-808548"
	I1017 19:26:18.567707  140531 addons.go:238] Setting addon inspektor-gadget=true in "addons-808548"
	I1017 19:26:18.567732  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.568338  140531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:26:18.568792  140531 addons.go:69] Setting registry-creds=true in profile "addons-808548"
	I1017 19:26:18.569091  140531 addons.go:238] Setting addon registry-creds=true in "addons-808548"
	I1017 19:26:18.569173  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.569465  140531 addons.go:69] Setting volumesnapshots=true in profile "addons-808548"
	I1017 19:26:18.571682  140531 addons.go:238] Setting addon volumesnapshots=true in "addons-808548"
	I1017 19:26:18.571690  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.571726  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.569432  140531 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-808548"
	I1017 19:26:18.572069  140531 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-808548"
	I1017 19:26:18.572102  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.564766  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.572243  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.571035  140531 addons.go:69] Setting volcano=true in profile "addons-808548"
	I1017 19:26:18.571009  140531 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-808548"
	I1017 19:26:18.572555  140531 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-808548"
	I1017 19:26:18.572640  140531 addons.go:238] Setting addon volcano=true in "addons-808548"
	I1017 19:26:18.573867  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.573959  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.574114  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.576337  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.576384  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.577035  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.577155  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
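The interleaved "Setting addon ..." and docker container inspect lines above come from per-addon goroutines started off the toEnable map, which is why their ordering is scrambled. A minimal sketch of that fan-out (enable stands in for whatever per-addon work applies; not minikube's actual code):

    package sketch

    import (
    	"log"
    	"sync"
    )

    // enableAddons runs each enabled addon's setup concurrently and waits
    // for all of them, logging per-addon failures without aborting the rest.
    func enableAddons(toEnable map[string]bool, enable func(name string) error) {
    	var wg sync.WaitGroup
    	for name, on := range toEnable {
    		if !on {
    			continue
    		}
    		wg.Add(1)
    		go func(n string) {
    			defer wg.Done()
    			if err := enable(n); err != nil {
    				log.Printf("! Enabling %q returned an error: %v", n, err)
    			}
    		}(name)
    	}
    	wg.Wait()
    }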
	I1017 19:26:18.651777  140531 addons.go:238] Setting addon default-storageclass=true in "addons-808548"
	I1017 19:26:18.653278  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.654489  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1017 19:26:18.654489  140531 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1017 19:26:18.655577  140531 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1017 19:26:18.656139  140531 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1017 19:26:18.656171  140531 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1017 19:26:18.656255  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.658428  140531 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1017 19:26:18.658505  140531 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1017 19:26:18.658597  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	W1017 19:26:18.659181  140531 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1017 19:26:18.659767  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.659931  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1017 19:26:18.661822  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1017 19:26:18.663313  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1017 19:26:18.683650  140531 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1017 19:26:18.684614  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.689776  140531 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 19:26:18.689880  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1017 19:26:18.690002  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.690230  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1017 19:26:18.690614  140531 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:26:18.690638  140531 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1017 19:26:18.690665  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1017 19:26:18.693577  140531 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1017 19:26:18.693612  140531 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1017 19:26:18.693683  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.697384  140531 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1017 19:26:18.697477  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1017 19:26:18.697564  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.697451  140531 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1017 19:26:18.700282  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1017 19:26:18.701073  140531 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 19:26:18.701097  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1017 19:26:18.701205  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.718515  140531 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1017 19:26:18.721291  140531 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:26:18.721360  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1017 19:26:18.721650  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.723425  140531 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:26:18.723639  140531 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1017 19:26:18.723716  140531 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1017 19:26:18.725134  140531 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:26:18.725160  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:26:18.725235  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.726040  140531 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 19:26:18.726058  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1017 19:26:18.726059  140531 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 19:26:18.726073  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1017 19:26:18.726115  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.726124  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.726407  140531 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 19:26:18.726420  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1017 19:26:18.726465  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.726694  140531 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1017 19:26:18.728993  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.731698  140531 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-808548"
	I1017 19:26:18.731868  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.731896  140531 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1017 19:26:18.731871  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1017 19:26:18.732872  140531 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1017 19:26:18.732895  140531 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1017 19:26:18.732952  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.733951  140531 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:26:18.733973  140531 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:26:18.734034  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.735840  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.738822  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1017 19:26:18.738849  140531 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1017 19:26:18.738920  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.742480  140531 out.go:179]   - Using image docker.io/registry:3.0.0
	I1017 19:26:18.742706  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.743767  140531 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1017 19:26:18.743793  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1017 19:26:18.743846  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.747179  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.747493  140531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
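The bash pipeline above edits CoreDNS's Corefile in place: fetch the configmap, sed a hosts{} stanza for host.minikube.internal ahead of the forward plugin, and kubectl replace the result (confirmed at 19:26:19.114938 below). The string surgery itself, sketched in Go; pushing the result back through kubectl is omitted:

    package sketch

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord splices a hosts{} stanza resolving
    // host.minikube.internal just before the Corefile's forward line.
    func injectHostRecord(corefile, hostIP string) string {
    	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	marker := "        forward . /etc/resolv.conf"
    	return strings.Replace(corefile, marker, stanza+marker, 1)
    }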
	I1017 19:26:18.781627  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.788611  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.796819  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.797132  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.799787  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.799990  140531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:26:18.801015  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.801643  140531 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1017 19:26:18.804590  140531 out.go:179]   - Using image docker.io/busybox:stable
	I1017 19:26:18.806021  140531 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 19:26:18.806088  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1017 19:26:18.806165  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.806553  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.810049  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	W1017 19:26:18.810555  140531 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 19:26:18.810600  140531 retry.go:31] will retry after 157.494373ms: ssh: handshake failed: EOF
	I1017 19:26:18.819482  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.826149  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.852511  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	W1017 19:26:18.853534  140531 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 19:26:18.853567  140531 retry.go:31] will retry after 130.786477ms: ssh: handshake failed: EOF
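The two "handshake failed: EOF" warnings above are benign: the node's sshd is still coming up, so retry.go waits a short randomized interval before dialing again. The pattern, as a hypothetical helper rather than minikube's actual retry package:

    package sketch

    import (
    	"math/rand"
    	"time"
    )

    // retryDial retries a transiently failing dial with a short randomized
    // delay between attempts, returning the last error if all attempts fail.
    func retryDial(dial func() error, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = dial(); err == nil {
    			return nil
    		}
    		time.Sleep(time.Duration(100+rand.Intn(150)) * time.Millisecond)
    	}
    	return err
    }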
	I1017 19:26:18.948001  140531 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1017 19:26:18.948035  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1017 19:26:18.950989  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 19:26:18.955801  140531 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:18.955834  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1017 19:26:18.991288  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:26:19.008091  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:19.011320  140531 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1017 19:26:19.011352  140531 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1017 19:26:19.018006  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 19:26:19.021361  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1017 19:26:19.021394  140531 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1017 19:26:19.023582  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:26:19.024091  140531 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1017 19:26:19.024109  140531 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1017 19:26:19.029794  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 19:26:19.030132  140531 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1017 19:26:19.030153  140531 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1017 19:26:19.045335  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1017 19:26:19.062471  140531 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 19:26:19.062507  140531 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1017 19:26:19.064715  140531 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1017 19:26:19.064753  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1017 19:26:19.071176  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1017 19:26:19.074015  140531 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1017 19:26:19.081498  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 19:26:19.083804  140531 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1017 19:26:19.083847  140531 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1017 19:26:19.085532  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 19:26:19.107339  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1017 19:26:19.109139  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 19:26:19.114938  140531 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
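
The "host record injected" line above is how host.minikube.internal becomes resolvable from inside the cluster: minikube edits CoreDNS's ConfigMap in kube-system. Below is a minimal client-go sketch of the idea; the "NodeHosts" data key and the naive append are assumptions for illustration (minikube's real logic lives behind start.go:976), and the kubeconfig path is the on-node one shown in the log.

// Sketch: append a host record to CoreDNS's ConfigMap, as the
// "host record injected" log line describes. The "NodeHosts" key and the
// plain append are assumptions for illustration, not minikube's code.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	record := "192.168.49.1 host.minikube.internal"
	cm.Data["NodeHosts"] += "\n" + record // assumed key name
	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("host record injected:", record)
}
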
	I1017 19:26:19.116966  140531 node_ready.go:35] waiting up to 6m0s for node "addons-808548" to be "Ready" ...
	I1017 19:26:19.145301  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1017 19:26:19.145413  140531 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1017 19:26:19.150775  140531 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1017 19:26:19.150873  140531 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1017 19:26:19.177105  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1017 19:26:19.177226  140531 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1017 19:26:19.214116  140531 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1017 19:26:19.214222  140531 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1017 19:26:19.221410  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 19:26:19.223800  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1017 19:26:19.223827  140531 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1017 19:26:19.236560  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1017 19:26:19.236591  140531 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1017 19:26:19.279500  140531 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1017 19:26:19.279529  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1017 19:26:19.280517  140531 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1017 19:26:19.280542  140531 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1017 19:26:19.298881  140531 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:26:19.298912  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1017 19:26:19.320207  140531 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1017 19:26:19.320261  140531 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1017 19:26:19.352188  140531 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1017 19:26:19.352338  140531 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1017 19:26:19.357325  140531 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1017 19:26:19.357355  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1017 19:26:19.385818  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:26:19.425916  140531 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1017 19:26:19.425945  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1017 19:26:19.426080  140531 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1017 19:26:19.426096  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1017 19:26:19.476869  140531 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 19:26:19.476895  140531 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1017 19:26:19.506583  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1017 19:26:19.529656  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 19:26:19.624448  140531 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-808548" context rescaled to 1 replicas
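
The kapi.go:214 line above shrinks the coredns Deployment to a single replica through the scale subresource. A minimal client-go sketch of that step, not minikube's exact code:

// Sketch of the "rescaled to 1 replicas" step: read the Deployment's
// scale, then write it back with one replica.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
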
	W1017 19:26:19.963893  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:19.964012  140531 retry.go:31] will retry after 213.922627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
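
Every retry of this apply fails identically because the error is client-side: kubectl validates that each manifest document sets apiVersion and kind, and ig-crd.yaml as shipped here apparently sets neither, so re-applying (and, below, escalating to kubectl apply --force) cannot change the outcome. A sketch of the same check, decoding the first document of the file with gopkg.in/yaml.v3 (illustrative only; a real checker would also iterate multi-document files):

// Sketch of the check kubectl enforces above: every manifest document
// must carry apiVersion and kind. Decoding ig-crd.yaml like this flags
// the same problem without ever calling the API server.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	raw, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	var tm typeMeta
	if err := yaml.Unmarshal(raw, &tm); err != nil {
		panic(err)
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		fmt.Println("invalid manifest: apiVersion and kind must both be set")
	}
}
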
	I1017 19:26:20.178217  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:20.271920  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.186345725s)
	I1017 19:26:20.271964  140531 addons.go:479] Verifying addon ingress=true in "addons-808548"
	I1017 19:26:20.272006  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.164616301s)
	I1017 19:26:20.272045  140531 addons.go:479] Verifying addon registry=true in "addons-808548"
	I1017 19:26:20.272071  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.162898391s)
	I1017 19:26:20.272101  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.050664714s)
	I1017 19:26:20.272357  140531 addons.go:479] Verifying addon metrics-server=true in "addons-808548"
	I1017 19:26:20.273925  140531 out.go:179] * Verifying registry addon...
	I1017 19:26:20.274069  140531 out.go:179] * Verifying ingress addon...
	I1017 19:26:20.277095  140531 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1017 19:26:20.277190  140531 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1017 19:26:20.281175  140531 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 19:26:20.281201  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:20.281813  140531 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1017 19:26:20.780433  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:20.780560  140531 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1017 19:26:20.780578  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
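
The kapi.go:96 lines repeated throughout are a poll loop: list pods by label selector, report the phase, sleep, repeat until everything is Running. A simplified client-go sketch of that loop (selector and namespace taken from the log; the timing is approximate):

// Sketch of the kapi.go polling above: wait until every pod matching a
// label selector reports phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	sel := "app.kubernetes.io/name=ingress-nginx"

	for {
		pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		if len(pods.Items) > 0 && running == len(pods.Items) {
			fmt.Println("pods ready for selector", sel)
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
	}
}
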
	I1017 19:26:20.828641  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.442694894s)
	I1017 19:26:20.828720  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.322030376s)
	W1017 19:26:20.828788  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1017 19:26:20.828824  140531 retry.go:31] will retry after 213.515572ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
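
This failure is pure ordering: the snapshot CRDs and a VolumeSnapshotClass instance travel in the same apply, and the CRD is not yet established when the custom resource is validated, hence "ensure CRDs are installed first". The retry below succeeds once API discovery catches up. A sketch of the explicit fix the retry loop approximates, waiting for the CRD's Established condition (illustrative; minikube simply retries):

// Sketch: after applying the snapshot CRDs, wait for Established before
// creating any VolumeSnapshotClass.
package main

import (
	"context"
	"fmt"
	"time"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := clientset.NewForConfigOrDie(cfg)
	name := "volumesnapshotclasses.snapshot.storage.k8s.io"

	for {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(
			context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range crd.Status.Conditions {
				if c.Type == apiextensionsv1.Established && c.Status == apiextensionsv1.ConditionTrue {
					fmt.Println("CRD established; safe to create VolumeSnapshotClass")
					return
				}
			}
		}
		time.Sleep(200 * time.Millisecond)
	}
}
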
	I1017 19:26:20.828952  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.299248686s)
	I1017 19:26:20.828981  140531 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-808548"
	I1017 19:26:20.830403  140531 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-808548 service yakd-dashboard -n yakd-dashboard
	
	I1017 19:26:20.830404  140531 out.go:179] * Verifying csi-hostpath-driver addon...
	I1017 19:26:20.833120  140531 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1017 19:26:20.838455  140531 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 19:26:20.838483  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:26:20.909677  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:20.909708  140531 retry.go:31] will retry after 188.633823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:21.043002  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:26:21.098951  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 19:26:21.120375  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
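
node_ready.go is watching the node's NodeReady condition; "Ready":"False" here simply means the kubelet has not yet reported ready (typically it is still waiting on the CNI). A minimal client-go sketch of that check:

// Sketch of the node_ready.go check: a node counts as Ready when its
// NodeReady condition reports True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-808548", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s (%s)\n", node.Name, c.Status, c.Reason)
		}
	}
}
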
	I1017 19:26:21.280865  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:21.280917  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:21.381726  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:21.781123  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:21.781347  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:21.836866  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:22.280818  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:22.281019  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:22.381909  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:22.780654  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:22.780895  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:22.836380  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:23.280546  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:23.280629  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:23.381178  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:23.550659  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.50760184s)
	I1017 19:26:23.550792  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.451801938s)
	W1017 19:26:23.550833  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:23.550861  140531 retry.go:31] will retry after 842.659034ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 19:26:23.620532  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:23.781093  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:23.781119  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:23.836832  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:24.281242  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:24.281304  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:24.382678  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:24.393799  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:24.780395  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:24.780526  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:24.836200  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:26:24.940639  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:24.940669  140531 retry.go:31] will retry after 1.108621186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
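
The retry.go:31 delays above (213ms, 188ms, 842ms, 1.1s, ...) roughly double over time with jitter. A standalone sketch of that pattern follows; it approximates the behavior the "will retry after ..." lines show and is not minikube's actual implementation:

// Sketch: re-run a failing step with roughly doubling, jittered delays.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, initial time.Duration, f func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		// Jitter by +/-50% so concurrent retry loops don't align.
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("Process exited with status 1")
		}
		return nil
	})
	fmt.Println("result:", err)
}
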
	I1017 19:26:25.280790  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:25.280881  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:25.381553  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:25.780349  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:25.780430  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:25.836045  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:26.050282  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 19:26:26.120590  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:26.280842  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:26.280935  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:26.301268  140531 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1017 19:26:26.301355  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:26.321119  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
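
sshutil.go:53 opens a key-authenticated SSH session to the container's forwarded port 22 (host port 32889). A minimal golang.org/x/crypto/ssh sketch using the parameters from the log; host-key verification is skipped here only because this is a local loopback tunnel to a container the test itself created:

// Sketch of the ssh client sshutil.go builds for running commands on the
// minikube node.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test tunnel only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32889", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh client connected")
}
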
	I1017 19:26:26.381596  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:26.432919  140531 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1017 19:26:26.446549  140531 addons.go:238] Setting addon gcp-auth=true in "addons-808548"
	I1017 19:26:26.446616  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:26.447210  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:26.468126  140531 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1017 19:26:26.468182  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
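
The -f argument to docker container inspect is a Go text/template evaluated against the container's inspect object. This standalone snippet runs the same expression against a mock of the relevant fields to show how the forwarded host port (32889 above) is extracted:

// Demo of the inspect format string; the struct mocks only the fields the
// template touches, the real inspect object has many more.
package main

import (
	"os"
	"text/template"
)

type portBinding struct{ HostIP, HostPort string }
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	var c inspect
	c.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "32889"}},
	}
	// Same template the cli_runner passes to `docker container inspect -f`.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, c); err != nil { // prints 32889
		panic(err)
	}
}
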
	I1017 19:26:26.486572  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	W1017 19:26:26.610657  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:26.610699  140531 retry.go:31] will retry after 649.773545ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:26.612871  140531 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:26:26.614437  140531 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1017 19:26:26.615922  140531 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1017 19:26:26.615946  140531 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1017 19:26:26.630619  140531 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1017 19:26:26.630643  140531 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1017 19:26:26.644951  140531 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 19:26:26.644979  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1017 19:26:26.659313  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 19:26:26.780999  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:26.781194  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:26.836993  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:26.979531  140531 addons.go:479] Verifying addon gcp-auth=true in "addons-808548"
	I1017 19:26:26.981406  140531 out.go:179] * Verifying gcp-auth addon...
	I1017 19:26:26.985530  140531 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1017 19:26:26.988515  140531 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1017 19:26:26.988538  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:27.261285  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:27.280659  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:27.280867  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:27.336909  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:27.488807  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:27.781427  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:27.781644  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 19:26:27.815672  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:27.815717  140531 retry.go:31] will retry after 2.501516396s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:27.878215  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:27.989098  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:28.280641  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:28.280701  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:28.336226  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:28.489540  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:28.620399  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:28.780715  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:28.780862  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:28.836209  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:28.989168  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:29.280640  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:29.281031  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:29.336587  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:29.488394  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:29.780480  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:29.780557  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:29.836038  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:29.988832  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:30.280512  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:30.280759  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:30.317406  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:30.336687  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:30.489005  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:30.780566  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:30.780805  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:30.836336  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:26:30.871049  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:30.871088  140531 retry.go:31] will retry after 1.557214415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:30.988887  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:31.120575  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:31.280689  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:31.280719  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:31.336785  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:31.488927  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:31.780753  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:31.780801  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:31.836641  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:31.989272  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:32.280080  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:32.280129  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:32.337089  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:32.429225  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:32.489014  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:32.780570  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:32.780963  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:32.835995  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:26:32.975492  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:32.975536  140531 retry.go:31] will retry after 5.233525697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:32.989061  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:33.280446  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:33.280528  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:33.336063  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:33.489120  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:33.620933  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:33.779920  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:33.780179  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:33.836545  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:33.988763  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:34.280345  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:34.280385  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:34.336125  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:34.489290  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:34.780442  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:34.780599  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:34.836202  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:34.989202  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:35.280899  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:35.280956  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:35.336459  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:35.489273  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:35.783242  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:35.783391  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:35.835945  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:35.988758  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:36.120629  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:36.280392  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:36.280490  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:36.335908  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:36.490692  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:36.780189  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:36.780250  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:36.835801  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:36.988676  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:37.280385  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:37.280453  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:37.336136  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:37.489133  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:37.780682  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:37.780764  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:37.836474  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:37.988167  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:38.209894  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:38.280504  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:38.280590  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:38.336845  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:38.489077  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:38.619726  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	W1017 19:26:38.755654  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:38.755685  140531 retry.go:31] will retry after 4.412965899s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:38.780592  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:38.780661  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:38.836084  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:38.988782  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:39.280191  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:39.280332  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:39.335790  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:39.488853  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:39.780411  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:39.780492  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:39.836139  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:39.988895  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:40.280058  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:40.280178  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:40.336706  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:40.488807  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:40.620217  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:40.779760  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:40.779813  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:40.836313  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:40.988937  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:41.280243  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:41.280343  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:41.335732  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:41.488683  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:41.780567  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:41.780773  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:41.836334  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:41.989425  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:42.280327  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:42.280420  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:42.336378  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:42.489508  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:42.621386  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:42.780449  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:42.780656  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:42.836093  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:42.988993  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:43.169962  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:43.282163  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:43.282174  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:43.336562  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:43.489234  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:43.726377  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:43.726412  140531 retry.go:31] will retry after 13.373427082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:43.780157  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:43.780216  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:43.836846  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:43.988603  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:44.280450  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:44.280451  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:44.336139  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:44.489338  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:44.780513  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:44.780668  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:44.836100  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:44.989289  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:45.119759  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:45.280858  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:45.280899  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:45.336923  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:45.489057  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:45.780593  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:45.780828  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:45.836849  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:45.988677  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:46.280317  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:46.280463  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:46.336000  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:46.488953  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:46.780224  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:46.780335  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:46.835872  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:46.988602  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:47.120532  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:47.280374  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:47.280673  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:47.336124  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:47.488895  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:47.780268  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:47.780455  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:47.836204  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:47.988967  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:48.280467  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:48.280670  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:48.336237  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:48.489101  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:48.780015  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:48.780211  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:48.836547  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:48.988235  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:49.279859  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:49.279947  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:49.336469  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:49.489406  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:49.620021  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:49.780121  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:49.780120  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:49.836603  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:49.988286  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:50.280818  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:50.280930  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:50.336813  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:50.488551  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:50.780031  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:50.780188  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:50.836867  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:50.988687  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:51.280725  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:51.280906  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:51.336658  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:51.488192  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:51.780162  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:51.780162  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:51.837133  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:51.989059  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:52.120811  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:52.280502  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:52.280794  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:52.336713  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:52.488458  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:52.780480  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:52.780633  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:52.836438  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:52.989431  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:53.279865  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:53.280004  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:53.336808  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:53.488552  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:53.780407  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:53.780540  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:53.836323  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:53.989420  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:54.280203  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:54.280370  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:54.335759  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:54.488330  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:54.619974  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:54.781034  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:54.781153  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:54.836544  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:54.989342  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:55.279864  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:55.280037  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:55.336532  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:55.488519  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:55.780539  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:55.780583  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:55.836124  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:55.988804  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:56.280226  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:56.280382  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:56.336215  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:56.489316  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:56.780819  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:56.780884  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:56.836685  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:56.988239  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:57.100485  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 19:26:57.120657  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:57.281495  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:57.281994  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:57.336583  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:57.488577  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:57.649711  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:57.649759  140531 retry.go:31] will retry after 18.492124279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
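retry.go:31 spaces the reapply attempts with an increasing, randomized delay (4.4s, then 13.4s, now 18.5s). A crude shell rendering of that apply-with-backoff loop, with illustrative delays in place of minikube's jittered ones:

	# sketch only: minikube randomizes its backoff, this just doubles a fixed delay
	delay=2
	for attempt in 1 2 3 4 5; do
	  kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml \
	                -f /etc/kubernetes/addons/ig-deployment.yaml && break
	  echo "apply failed (attempt ${attempt}), retrying in ${delay}s" >&2
	  sleep "${delay}"
	  delay=$((delay * 2))
	done

Note that no amount of retrying fixes this particular failure, since the manifest itself is invalid; the backoff loop only helps with transient apiserver errors.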
	I1017 19:26:57.780772  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:57.780865  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:57.837066  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:57.990378  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:58.280453  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:58.280463  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:58.336345  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:58.489046  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:58.780096  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:58.780258  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:58.835880  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:58.988716  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:59.280356  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:59.280595  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:59.336164  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:59.489048  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:59.619292  140531 node_ready.go:49] node "addons-808548" is "Ready"
	I1017 19:26:59.619327  140531 node_ready.go:38] duration metric: took 40.502324802s for node "addons-808548" to be "Ready" ...
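node_ready.go polls the node's Ready condition until it reports True; here that took about 40.5s. The same condition can be read, or blocked on, directly:

	# read the condition node_ready.go is polling
	kubectl get node addons-808548 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or wait for it in one call
	kubectl wait --for=condition=Ready node/addons-808548 --timeout=120s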
	I1017 19:26:59.619345  140531 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:26:59.619412  140531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:59.638966  140531 api_server.go:72] duration metric: took 41.075907287s to wait for apiserver process to appear ...
	I1017 19:26:59.639000  140531 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:26:59.639027  140531 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 19:26:59.648609  140531 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 19:26:59.651252  140531 api_server.go:141] control plane version: v1.34.1
	I1017 19:26:59.651293  140531 api_server.go:131] duration metric: took 12.283788ms to wait for apiserver health ...
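The healthz probe is a plain GET against the apiserver; kubectl can issue the same request with the kubeconfig's credentials, avoiding the TLS flags a raw curl against https://192.168.49.2:8443 would need:

	kubectl get --raw /healthz     # prints: ok
	kubectl get --raw /version     # reports the control plane version, v1.34.1 here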
	I1017 19:26:59.651304  140531 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:26:59.656229  140531 system_pods.go:59] 18 kube-system pods found
	I1017 19:26:59.656275  140531 system_pods.go:61] "amd-gpu-device-plugin-s9xrd" [b9ac4437-8f9f-4841-8858-358c218c25d2] Pending
	I1017 19:26:59.656285  140531 system_pods.go:61] "coredns-66bc5c9577-q7x6k" [f02e0ef6-42d8-4b0a-89a9-10488d5307dc] Pending
	I1017 19:26:59.656291  140531 system_pods.go:61] "csi-hostpath-attacher-0" [261c7cef-97b8-4198-9dfc-1693023dbcef] Pending
	I1017 19:26:59.656297  140531 system_pods.go:61] "csi-hostpath-resizer-0" [07450e65-29e6-43a9-80e7-f120cfccdb8e] Pending
	I1017 19:26:59.656302  140531 system_pods.go:61] "csi-hostpathplugin-srnfw" [62107854-6ddd-4530-82c2-823bcdaca289] Pending
	I1017 19:26:59.656307  140531 system_pods.go:61] "etcd-addons-808548" [df715b91-c74e-47d9-a49a-1669ba943c1e] Running
	I1017 19:26:59.656313  140531 system_pods.go:61] "kindnet-lwg6r" [e578c681-a2ec-4dd1-ab3e-b7ee9ed0ab7f] Running
	I1017 19:26:59.656320  140531 system_pods.go:61] "kube-apiserver-addons-808548" [89f97c7f-8789-4788-9e8e-bc061735d572] Running
	I1017 19:26:59.656325  140531 system_pods.go:61] "kube-controller-manager-addons-808548" [4c9c180a-be95-45ca-afe4-7de80c8b224e] Running
	I1017 19:26:59.656330  140531 system_pods.go:61] "kube-ingress-dns-minikube" [7233f829-bd06-422c-9013-7f76f4faf35d] Pending
	I1017 19:26:59.656339  140531 system_pods.go:61] "kube-proxy-ck6l7" [50768f34-51f0-440e-8651-5f2711d813c3] Running
	I1017 19:26:59.656344  140531 system_pods.go:61] "kube-scheduler-addons-808548" [b9521dd4-c933-4e87-afa5-505f31f56de8] Running
	I1017 19:26:59.656349  140531 system_pods.go:61] "metrics-server-85b7d694d7-q44mn" [8400f12c-e748-4220-a5b1-bd66d3cb4158] Pending
	I1017 19:26:59.656358  140531 system_pods.go:61] "registry-6b586f9694-ns7g9" [eacf9d9f-262f-4bd2-b0a0-f13212de3b0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:26:59.656365  140531 system_pods.go:61] "registry-creds-764b6fb674-d7p4h" [325a60a7-5f62-4ab1-9199-ac88319f2912] Pending
	I1017 19:26:59.656372  140531 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q75kr" [ad278525-ed53-4721-8b84-7f0e01657dd5] Pending
	I1017 19:26:59.656376  140531 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qpq25" [c0ce171d-6556-40a1-bc02-fd2db1cb57e3] Pending
	I1017 19:26:59.656381  140531 system_pods.go:61] "storage-provisioner" [00412528-a403-437c-8a95-82e04747a24b] Pending
	I1017 19:26:59.656388  140531 system_pods.go:74] duration metric: took 5.077305ms to wait for pod list to return data ...
	I1017 19:26:59.656398  140531 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:26:59.660505  140531 default_sa.go:45] found service account: "default"
	I1017 19:26:59.660540  140531 default_sa.go:55] duration metric: took 4.134125ms for default service account to be created ...
	I1017 19:26:59.660555  140531 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:26:59.663896  140531 system_pods.go:86] 19 kube-system pods found
	I1017 19:26:59.663931  140531 system_pods.go:89] "amd-gpu-device-plugin-s9xrd" [b9ac4437-8f9f-4841-8858-358c218c25d2] Pending
	I1017 19:26:59.663939  140531 system_pods.go:89] "coredns-66bc5c9577-q7x6k" [f02e0ef6-42d8-4b0a-89a9-10488d5307dc] Pending
	I1017 19:26:59.663945  140531 system_pods.go:89] "csi-hostpath-attacher-0" [261c7cef-97b8-4198-9dfc-1693023dbcef] Pending
	I1017 19:26:59.663950  140531 system_pods.go:89] "csi-hostpath-resizer-0" [07450e65-29e6-43a9-80e7-f120cfccdb8e] Pending
	I1017 19:26:59.663954  140531 system_pods.go:89] "csi-hostpathplugin-srnfw" [62107854-6ddd-4530-82c2-823bcdaca289] Pending
	I1017 19:26:59.663959  140531 system_pods.go:89] "etcd-addons-808548" [df715b91-c74e-47d9-a49a-1669ba943c1e] Running
	I1017 19:26:59.663965  140531 system_pods.go:89] "kindnet-lwg6r" [e578c681-a2ec-4dd1-ab3e-b7ee9ed0ab7f] Running
	I1017 19:26:59.663970  140531 system_pods.go:89] "kube-apiserver-addons-808548" [89f97c7f-8789-4788-9e8e-bc061735d572] Running
	I1017 19:26:59.663976  140531 system_pods.go:89] "kube-controller-manager-addons-808548" [4c9c180a-be95-45ca-afe4-7de80c8b224e] Running
	I1017 19:26:59.663989  140531 system_pods.go:89] "kube-ingress-dns-minikube" [7233f829-bd06-422c-9013-7f76f4faf35d] Pending
	I1017 19:26:59.663994  140531 system_pods.go:89] "kube-proxy-ck6l7" [50768f34-51f0-440e-8651-5f2711d813c3] Running
	I1017 19:26:59.663999  140531 system_pods.go:89] "kube-scheduler-addons-808548" [b9521dd4-c933-4e87-afa5-505f31f56de8] Running
	I1017 19:26:59.664004  140531 system_pods.go:89] "metrics-server-85b7d694d7-q44mn" [8400f12c-e748-4220-a5b1-bd66d3cb4158] Pending
	I1017 19:26:59.664008  140531 system_pods.go:89] "nvidia-device-plugin-daemonset-qh9hh" [5874d0fa-f0c2-4888-8ea5-7dda59b9164e] Pending
	I1017 19:26:59.664018  140531 system_pods.go:89] "registry-6b586f9694-ns7g9" [eacf9d9f-262f-4bd2-b0a0-f13212de3b0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:26:59.664106  140531 system_pods.go:89] "registry-creds-764b6fb674-d7p4h" [325a60a7-5f62-4ab1-9199-ac88319f2912] Pending
	I1017 19:26:59.664126  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q75kr" [ad278525-ed53-4721-8b84-7f0e01657dd5] Pending
	I1017 19:26:59.664132  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qpq25" [c0ce171d-6556-40a1-bc02-fd2db1cb57e3] Pending
	I1017 19:26:59.664138  140531 system_pods.go:89] "storage-provisioner" [00412528-a403-437c-8a95-82e04747a24b] Pending
	I1017 19:26:59.664159  140531 retry.go:31] will retry after 206.226157ms: missing components: kube-dns
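At this point the only missing component is kube-dns, i.e. the coredns pod has not reached Running yet. Assuming the standard k8s-app=kube-dns label that kubeadm-style clusters put on CoreDNS, the blocking pod can be watched with:

	# CoreDNS carries the legacy kube-dns label in kubeadm-style clusters
	kubectl get pods -n kube-system -l k8s-app=kube-dns -w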
	I1017 19:26:59.779945  140531 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 19:26:59.779968  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:59.779963  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:59.836096  140531 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 19:26:59.836126  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:59.878734  140531 system_pods.go:86] 20 kube-system pods found
	I1017 19:26:59.878790  140531 system_pods.go:89] "amd-gpu-device-plugin-s9xrd" [b9ac4437-8f9f-4841-8858-358c218c25d2] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 19:26:59.878801  140531 system_pods.go:89] "coredns-66bc5c9577-q7x6k" [f02e0ef6-42d8-4b0a-89a9-10488d5307dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:26:59.878809  140531 system_pods.go:89] "csi-hostpath-attacher-0" [261c7cef-97b8-4198-9dfc-1693023dbcef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:26:59.878815  140531 system_pods.go:89] "csi-hostpath-resizer-0" [07450e65-29e6-43a9-80e7-f120cfccdb8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:26:59.878820  140531 system_pods.go:89] "csi-hostpathplugin-srnfw" [62107854-6ddd-4530-82c2-823bcdaca289] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 19:26:59.878825  140531 system_pods.go:89] "etcd-addons-808548" [df715b91-c74e-47d9-a49a-1669ba943c1e] Running
	I1017 19:26:59.878830  140531 system_pods.go:89] "kindnet-lwg6r" [e578c681-a2ec-4dd1-ab3e-b7ee9ed0ab7f] Running
	I1017 19:26:59.878836  140531 system_pods.go:89] "kube-apiserver-addons-808548" [89f97c7f-8789-4788-9e8e-bc061735d572] Running
	I1017 19:26:59.878840  140531 system_pods.go:89] "kube-controller-manager-addons-808548" [4c9c180a-be95-45ca-afe4-7de80c8b224e] Running
	I1017 19:26:59.878850  140531 system_pods.go:89] "kube-ingress-dns-minikube" [7233f829-bd06-422c-9013-7f76f4faf35d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:26:59.878859  140531 system_pods.go:89] "kube-proxy-ck6l7" [50768f34-51f0-440e-8651-5f2711d813c3] Running
	I1017 19:26:59.878863  140531 system_pods.go:89] "kube-scheduler-addons-808548" [b9521dd4-c933-4e87-afa5-505f31f56de8] Running
	I1017 19:26:59.878868  140531 system_pods.go:89] "metrics-server-85b7d694d7-q44mn" [8400f12c-e748-4220-a5b1-bd66d3cb4158] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:26:59.878877  140531 system_pods.go:89] "nvidia-device-plugin-daemonset-qh9hh" [5874d0fa-f0c2-4888-8ea5-7dda59b9164e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:26:59.878885  140531 system_pods.go:89] "registry-6b586f9694-ns7g9" [eacf9d9f-262f-4bd2-b0a0-f13212de3b0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:26:59.878890  140531 system_pods.go:89] "registry-creds-764b6fb674-d7p4h" [325a60a7-5f62-4ab1-9199-ac88319f2912] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:26:59.878897  140531 system_pods.go:89] "registry-proxy-5gbvf" [0f8d0ee8-125b-4765-824e-19053a0dcfe6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:26:59.878911  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q75kr" [ad278525-ed53-4721-8b84-7f0e01657dd5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:26:59.878924  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qpq25" [c0ce171d-6556-40a1-bc02-fd2db1cb57e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:26:59.878936  140531 system_pods.go:89] "storage-provisioner" [00412528-a403-437c-8a95-82e04747a24b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:26:59.878956  140531 retry.go:31] will retry after 264.802509ms: missing components: kube-dns
	I1017 19:26:59.989766  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:00.149947  140531 system_pods.go:86] 20 kube-system pods found
	I1017 19:27:00.149986  140531 system_pods.go:89] "amd-gpu-device-plugin-s9xrd" [b9ac4437-8f9f-4841-8858-358c218c25d2] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 19:27:00.149998  140531 system_pods.go:89] "coredns-66bc5c9577-q7x6k" [f02e0ef6-42d8-4b0a-89a9-10488d5307dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:27:00.150007  140531 system_pods.go:89] "csi-hostpath-attacher-0" [261c7cef-97b8-4198-9dfc-1693023dbcef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:27:00.150015  140531 system_pods.go:89] "csi-hostpath-resizer-0" [07450e65-29e6-43a9-80e7-f120cfccdb8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:27:00.150023  140531 system_pods.go:89] "csi-hostpathplugin-srnfw" [62107854-6ddd-4530-82c2-823bcdaca289] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 19:27:00.150030  140531 system_pods.go:89] "etcd-addons-808548" [df715b91-c74e-47d9-a49a-1669ba943c1e] Running
	I1017 19:27:00.150037  140531 system_pods.go:89] "kindnet-lwg6r" [e578c681-a2ec-4dd1-ab3e-b7ee9ed0ab7f] Running
	I1017 19:27:00.150048  140531 system_pods.go:89] "kube-apiserver-addons-808548" [89f97c7f-8789-4788-9e8e-bc061735d572] Running
	I1017 19:27:00.150053  140531 system_pods.go:89] "kube-controller-manager-addons-808548" [4c9c180a-be95-45ca-afe4-7de80c8b224e] Running
	I1017 19:27:00.150060  140531 system_pods.go:89] "kube-ingress-dns-minikube" [7233f829-bd06-422c-9013-7f76f4faf35d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:27:00.150065  140531 system_pods.go:89] "kube-proxy-ck6l7" [50768f34-51f0-440e-8651-5f2711d813c3] Running
	I1017 19:27:00.150071  140531 system_pods.go:89] "kube-scheduler-addons-808548" [b9521dd4-c933-4e87-afa5-505f31f56de8] Running
	I1017 19:27:00.150079  140531 system_pods.go:89] "metrics-server-85b7d694d7-q44mn" [8400f12c-e748-4220-a5b1-bd66d3cb4158] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:27:00.150092  140531 system_pods.go:89] "nvidia-device-plugin-daemonset-qh9hh" [5874d0fa-f0c2-4888-8ea5-7dda59b9164e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:27:00.150100  140531 system_pods.go:89] "registry-6b586f9694-ns7g9" [eacf9d9f-262f-4bd2-b0a0-f13212de3b0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:27:00.150108  140531 system_pods.go:89] "registry-creds-764b6fb674-d7p4h" [325a60a7-5f62-4ab1-9199-ac88319f2912] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:27:00.150116  140531 system_pods.go:89] "registry-proxy-5gbvf" [0f8d0ee8-125b-4765-824e-19053a0dcfe6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:27:00.150132  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q75kr" [ad278525-ed53-4721-8b84-7f0e01657dd5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:27:00.150147  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qpq25" [c0ce171d-6556-40a1-bc02-fd2db1cb57e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:27:00.150160  140531 system_pods.go:89] "storage-provisioner" [00412528-a403-437c-8a95-82e04747a24b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:27:00.150186  140531 retry.go:31] will retry after 402.374722ms: missing components: kube-dns
	I1017 19:27:00.280978  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:00.281065  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:00.336895  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:00.488983  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:00.561719  140531 system_pods.go:86] 20 kube-system pods found
	I1017 19:27:00.561779  140531 system_pods.go:89] "amd-gpu-device-plugin-s9xrd" [b9ac4437-8f9f-4841-8858-358c218c25d2] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 19:27:00.561789  140531 system_pods.go:89] "coredns-66bc5c9577-q7x6k" [f02e0ef6-42d8-4b0a-89a9-10488d5307dc] Running
	I1017 19:27:00.561800  140531 system_pods.go:89] "csi-hostpath-attacher-0" [261c7cef-97b8-4198-9dfc-1693023dbcef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:27:00.561810  140531 system_pods.go:89] "csi-hostpath-resizer-0" [07450e65-29e6-43a9-80e7-f120cfccdb8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:27:00.561823  140531 system_pods.go:89] "csi-hostpathplugin-srnfw" [62107854-6ddd-4530-82c2-823bcdaca289] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 19:27:00.561830  140531 system_pods.go:89] "etcd-addons-808548" [df715b91-c74e-47d9-a49a-1669ba943c1e] Running
	I1017 19:27:00.561836  140531 system_pods.go:89] "kindnet-lwg6r" [e578c681-a2ec-4dd1-ab3e-b7ee9ed0ab7f] Running
	I1017 19:27:00.561843  140531 system_pods.go:89] "kube-apiserver-addons-808548" [89f97c7f-8789-4788-9e8e-bc061735d572] Running
	I1017 19:27:00.561849  140531 system_pods.go:89] "kube-controller-manager-addons-808548" [4c9c180a-be95-45ca-afe4-7de80c8b224e] Running
	I1017 19:27:00.561857  140531 system_pods.go:89] "kube-ingress-dns-minikube" [7233f829-bd06-422c-9013-7f76f4faf35d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:27:00.561862  140531 system_pods.go:89] "kube-proxy-ck6l7" [50768f34-51f0-440e-8651-5f2711d813c3] Running
	I1017 19:27:00.561868  140531 system_pods.go:89] "kube-scheduler-addons-808548" [b9521dd4-c933-4e87-afa5-505f31f56de8] Running
	I1017 19:27:00.561878  140531 system_pods.go:89] "metrics-server-85b7d694d7-q44mn" [8400f12c-e748-4220-a5b1-bd66d3cb4158] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:27:00.561887  140531 system_pods.go:89] "nvidia-device-plugin-daemonset-qh9hh" [5874d0fa-f0c2-4888-8ea5-7dda59b9164e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:27:00.561901  140531 system_pods.go:89] "registry-6b586f9694-ns7g9" [eacf9d9f-262f-4bd2-b0a0-f13212de3b0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:27:00.561909  140531 system_pods.go:89] "registry-creds-764b6fb674-d7p4h" [325a60a7-5f62-4ab1-9199-ac88319f2912] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:27:00.561921  140531 system_pods.go:89] "registry-proxy-5gbvf" [0f8d0ee8-125b-4765-824e-19053a0dcfe6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:27:00.561931  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q75kr" [ad278525-ed53-4721-8b84-7f0e01657dd5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:27:00.561940  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qpq25" [c0ce171d-6556-40a1-bc02-fd2db1cb57e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:27:00.561945  140531 system_pods.go:89] "storage-provisioner" [00412528-a403-437c-8a95-82e04747a24b] Running
	I1017 19:27:00.561961  140531 system_pods.go:126] duration metric: took 901.397753ms to wait for k8s-apps to be running ...
	I1017 19:27:00.561971  140531 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:27:00.562033  140531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:27:00.582658  140531 system_svc.go:56] duration metric: took 20.678305ms WaitForService to wait for kubelet
	I1017 19:27:00.582684  140531 kubeadm.go:586] duration metric: took 42.019634517s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:27:00.582705  140531 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:27:00.585722  140531 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:27:00.585765  140531 node_conditions.go:123] node cpu capacity is 8
	I1017 19:27:00.585785  140531 node_conditions.go:105] duration metric: took 3.075104ms to run NodePressure ...
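node_conditions.go derives those figures from the node object's capacity fields; they can be read back the same way:

	# ephemeral storage (304681132Ki) and CPU count (8) as reported above
	kubectl get node addons-808548 \
	  -o jsonpath='{.status.capacity.ephemeral-storage} {.status.capacity.cpu}{"\n"}'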
	I1017 19:27:00.585800  140531 start.go:241] waiting for startup goroutines ...
	I1017 19:27:00.781389  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:00.781422  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:00.837033  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:00.988435  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:01.280356  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:01.280458  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:01.335963  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:01.488598  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:01.780881  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:01.780931  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:01.836889  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:01.988574  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:02.281344  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:02.281339  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:02.382430  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:02.490240  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same four "waiting for pod ... Pending" lines repeat roughly every 500ms through 19:27:15.989 ...]
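The label selectors being polled above can be checked by hand with kubectl; a minimal sketch, assuming the kubeconfig already points at the addons-808548 cluster:

	# list the pods behind each selector the harness is waiting on
	kubectl get pods -A -l kubernetes.io/minikube-addons=registry
	kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx
	kubectl get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver
	kubectl get pods -A -l kubernetes.io/minikube-addons=gcp-auth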
	I1017 19:27:16.142691  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:27:16.280716  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:16.280780  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:16.336575  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:16.489336  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:16.781649  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:16.782770  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 19:27:16.806263  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:27:16.806303  140531 retry.go:31] will retry after 18.104254162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
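The failure above is kubectl's client-side schema validation: every document in an applied manifest must declare both apiVersion and kind. A minimal sketch of how one might inspect the offending file on the node (the field values in the comments are illustrative of a well-formed CRD, not taken from the actual ig-crd.yaml):

	# print the head of the manifest that failed validation
	sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	# a valid CRD document would begin with both required fields, e.g.:
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition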
	I1017 19:27:16.837721  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... polling of the registry, ingress-nginx, csi-hostpath-driver, and gcp-auth selectors continues every ~500ms through 19:27:34.837 ...]
	I1017 19:27:34.911299  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:27:34.989319  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:35.280715  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:35.280729  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:35.337816  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:35.488749  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:27:35.501343  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:27:35.501378  140531 retry.go:31] will retry after 26.661352304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:27:35.783508  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:35.786943  140531 kapi.go:107] duration metric: took 1m15.509748146s to wait for kubernetes.io/minikube-addons=registry ...
	[... polling of the ingress-nginx, csi-hostpath-driver, and gcp-auth selectors continues every ~500ms through 19:27:39.488 ...]
	I1017 19:27:39.781335  140531 kapi.go:107] duration metric: took 1m19.504237672s to wait for app.kubernetes.io/name=ingress-nginx ...
	[... polling of the csi-hostpath-driver and gcp-auth selectors continues every ~500ms through 19:27:45.837 ...]
	I1017 19:27:45.989461  140531 kapi.go:107] duration metric: took 1m19.003928203s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1017 19:27:45.992155  140531 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-808548 cluster.
	I1017 19:27:45.994005  140531 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1017 19:27:45.995595  140531 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
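Both options mentioned in the messages above can be exercised directly; a minimal sketch (the pod name "demo" is illustrative):

	# opt a new pod out of credential mounting via the documented skip label
	kubectl run demo --image=nginx --labels="gcp-auth-skip-secret=true"
	# or re-mount credentials into existing pods by re-enabling the addon with --refresh
	out/minikube-linux-amd64 -p addons-808548 addons enable gcp-auth --refresh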
	I1017 19:27:46.336977  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:46.836728  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:47.338984  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:47.836897  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:48.337520  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:48.836613  140531 kapi.go:107] duration metric: took 1m28.003487052s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1017 19:28:02.166341  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 19:28:02.725320  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 19:28:02.725447  140531 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
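Since the inspektor-gadget callback ultimately failed, the addon can be retried and inspected outside the test harness; a minimal sketch using the same profile:

	# retry enabling the addon manually
	out/minikube-linux-amd64 -p addons-808548 addons enable inspektor-gadget
	# the gadget resources that did apply can be checked in their namespace
	kubectl get daemonset,pods -n gadget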
	I1017 19:28:02.727360  140531 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, registry-creds, amd-gpu-device-plugin, default-storageclass, metrics-server, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1017 19:28:02.728897  140531 addons.go:514] duration metric: took 1m44.165697583s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner registry-creds amd-gpu-device-plugin default-storageclass metrics-server storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1017 19:28:02.728993  140531 start.go:246] waiting for cluster config update ...
	I1017 19:28:02.729019  140531 start.go:255] writing updated cluster config ...
	I1017 19:28:02.729333  140531 ssh_runner.go:195] Run: rm -f paused
	I1017 19:28:02.734175  140531 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:28:02.738215  140531 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q7x6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:02.742650  140531 pod_ready.go:94] pod "coredns-66bc5c9577-q7x6k" is "Ready"
	I1017 19:28:02.742677  140531 pod_ready.go:86] duration metric: took 4.437848ms for pod "coredns-66bc5c9577-q7x6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:02.745069  140531 pod_ready.go:83] waiting for pod "etcd-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:02.750076  140531 pod_ready.go:94] pod "etcd-addons-808548" is "Ready"
	I1017 19:28:02.750125  140531 pod_ready.go:86] duration metric: took 5.029215ms for pod "etcd-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:02.752734  140531 pod_ready.go:83] waiting for pod "kube-apiserver-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:02.757147  140531 pod_ready.go:94] pod "kube-apiserver-addons-808548" is "Ready"
	I1017 19:28:02.757175  140531 pod_ready.go:86] duration metric: took 4.397202ms for pod "kube-apiserver-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:02.759289  140531 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:03.137700  140531 pod_ready.go:94] pod "kube-controller-manager-addons-808548" is "Ready"
	I1017 19:28:03.137733  140531 pod_ready.go:86] duration metric: took 378.417325ms for pod "kube-controller-manager-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:03.338519  140531 pod_ready.go:83] waiting for pod "kube-proxy-ck6l7" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:03.737956  140531 pod_ready.go:94] pod "kube-proxy-ck6l7" is "Ready"
	I1017 19:28:03.737985  140531 pod_ready.go:86] duration metric: took 399.429394ms for pod "kube-proxy-ck6l7" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:03.938699  140531 pod_ready.go:83] waiting for pod "kube-scheduler-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:04.338129  140531 pod_ready.go:94] pod "kube-scheduler-addons-808548" is "Ready"
	I1017 19:28:04.338159  140531 pod_ready.go:86] duration metric: took 399.433782ms for pod "kube-scheduler-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:04.338174  140531 pod_ready.go:40] duration metric: took 1.603941826s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
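The label-based readiness waits logged above have a direct kubectl equivalent; a minimal sketch for one of the listed labels, mirroring the 4m budget:

	# block until the etcd control-plane pod reports Ready
	kubectl wait --for=condition=Ready pod -l component=etcd -n kube-system --timeout=4m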
	I1017 19:28:04.384487  140531 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 19:28:04.386821  140531 out.go:179] * Done! kubectl is now configured to use "addons-808548" cluster and "default" namespace by default
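The final message can be verified from the host; a minimal sketch:

	# minikube names the kubectl context after the profile
	kubectl config current-context   # expected: addons-808548
	kubectl get pods -n default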
	
	
	==> CRI-O <==
	Oct 17 19:29:16 addons-808548 crio[769]: time="2025-10-17T19:29:16.922907246Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-d7p4h/registry-creds" id=a7db6fb6-c764-4d63-94ae-9689fbcdadff name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:29:16 addons-808548 crio[769]: time="2025-10-17T19:29:16.92368755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:29:16 addons-808548 crio[769]: time="2025-10-17T19:29:16.929022257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:29:16 addons-808548 crio[769]: time="2025-10-17T19:29:16.929455618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:29:16 addons-808548 crio[769]: time="2025-10-17T19:29:16.962159489Z" level=info msg="Created container e974c43b9027e157257f05ede1a3f86c839b07db82be9fb7c16ffae7189b011a: kube-system/registry-creds-764b6fb674-d7p4h/registry-creds" id=a7db6fb6-c764-4d63-94ae-9689fbcdadff name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:29:16 addons-808548 crio[769]: time="2025-10-17T19:29:16.962803622Z" level=info msg="Starting container: e974c43b9027e157257f05ede1a3f86c839b07db82be9fb7c16ffae7189b011a" id=d9e65509-430a-4af8-8c14-e55d3f511097 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:29:16 addons-808548 crio[769]: time="2025-10-17T19:29:16.964764556Z" level=info msg="Started container" PID=8947 containerID=e974c43b9027e157257f05ede1a3f86c839b07db82be9fb7c16ffae7189b011a description=kube-system/registry-creds-764b6fb674-d7p4h/registry-creds id=d9e65509-430a-4af8-8c14-e55d3f511097 name=/runtime.v1.RuntimeService/StartContainer sandboxID=32eb112d62f48b7c27e9f14d0bf11927e3f8dcca2a9c46aaa944e1aa8db58b62
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.437909431Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-c7zmx/POD" id=3ba4bc2b-5de5-4f58-bf26-92ed9d1f8df6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.438017413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.445984841Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-c7zmx Namespace:default ID:c0638d595ab6e51c2f636a3fbbf3e3567ed79a14e3a43202bed9db429c742b9a UID:6a3babf3-ffd0-4b91-b85a-029841f7dd87 NetNS:/var/run/netns/0c1fa2ac-7f2d-47d9-ae24-f013c969d398 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000af24d8}] Aliases:map[]}"
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.446028197Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-c7zmx to CNI network \"kindnet\" (type=ptp)"
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.457542303Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-c7zmx Namespace:default ID:c0638d595ab6e51c2f636a3fbbf3e3567ed79a14e3a43202bed9db429c742b9a UID:6a3babf3-ffd0-4b91-b85a-029841f7dd87 NetNS:/var/run/netns/0c1fa2ac-7f2d-47d9-ae24-f013c969d398 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000af24d8}] Aliases:map[]}"
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.457674783Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-c7zmx for CNI network kindnet (type=ptp)"
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.458643529Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.45946358Z" level=info msg="Ran pod sandbox c0638d595ab6e51c2f636a3fbbf3e3567ed79a14e3a43202bed9db429c742b9a with infra container: default/hello-world-app-5d498dc89-c7zmx/POD" id=3ba4bc2b-5de5-4f58-bf26-92ed9d1f8df6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.46067441Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ec2ee32a-205c-46ac-a8f8-0a6b9156dd30 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.460902181Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=ec2ee32a-205c-46ac-a8f8-0a6b9156dd30 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.460951091Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=ec2ee32a-205c-46ac-a8f8-0a6b9156dd30 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.461593657Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=092e6ba7-fc19-43cd-a0e8-c9cb9cc0c52d name=/runtime.v1.ImageService/PullImage
	Oct 17 19:30:57 addons-808548 crio[769]: time="2025-10-17T19:30:57.466008446Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 17 19:30:58 addons-808548 crio[769]: time="2025-10-17T19:30:58.635429006Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=092e6ba7-fc19-43cd-a0e8-c9cb9cc0c52d name=/runtime.v1.ImageService/PullImage
	Oct 17 19:30:58 addons-808548 crio[769]: time="2025-10-17T19:30:58.636084318Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=328efd82-41fd-4312-86a5-7fde041fc947 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:30:58 addons-808548 crio[769]: time="2025-10-17T19:30:58.637670363Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3f92d8ad-90a4-4051-9f91-d6e1e201f72c name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:30:58 addons-808548 crio[769]: time="2025-10-17T19:30:58.642332074Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-c7zmx/hello-world-app" id=56d3e6d7-7f68-4ba4-8cce-967bbdca8381 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:30:58 addons-808548 crio[769]: time="2025-10-17T19:30:58.643030058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	a2543ae12c8cd       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Created             hello-world-app                          0                   c0638d595ab6e       hello-world-app-5d498dc89-c7zmx             default
	e974c43b9027e       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   32eb112d62f48       registry-creds-764b6fb674-d7p4h             kube-system
	f7af0985cb013       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago            Running             nginx                                    0                   fb5a9565d165b       nginx                                       default
	a8328a95cc880       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   695751ac50690       busybox                                     default
	53d269845a83e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago            Running             csi-snapshotter                          0                   5280b98753dac       csi-hostpathplugin-srnfw                    kube-system
	508d623947dcb       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago            Running             csi-provisioner                          0                   5280b98753dac       csi-hostpathplugin-srnfw                    kube-system
	534e46164a73e       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago            Running             liveness-probe                           0                   5280b98753dac       csi-hostpathplugin-srnfw                    kube-system
	5579a2f9e5057       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago            Running             hostpath                                 0                   5280b98753dac       csi-hostpathplugin-srnfw                    kube-system
	4b012af7a50a8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago            Running             gcp-auth                                 0                   d1beaa31ee332       gcp-auth-78565c9fb4-cnh4w                   gcp-auth
	57e22e20440d1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago            Running             node-driver-registrar                    0                   5280b98753dac       csi-hostpathplugin-srnfw                    kube-system
	c4e78204d42dc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago            Running             gadget                                   0                   55093274c9c7b       gadget-qzzq2                                gadget
	3ddb7aa30a0d2       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago            Running             controller                               0                   efeef1174b2fb       ingress-nginx-controller-675c5ddd98-bszbb   ingress-nginx
	9a21825a549c2       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   c784fb1dc3219       registry-proxy-5gbvf                        kube-system
	5d22bcde5dbdb       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   db6b3eb3c8b31       csi-hostpath-resizer-0                      kube-system
	9ca980090d556       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              patch                                    0                   00e3ff684c6df       ingress-nginx-admission-patch-56ccn         ingress-nginx
	56688cf87e4fa       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   6cb0e25b97918       amd-gpu-device-plugin-s9xrd                 kube-system
	59d6b1b073fe9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   30cc544c47994       snapshot-controller-7d9fbc56b8-q75kr        kube-system
	2bb7b66e533ea       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   69b0e16d66811       yakd-dashboard-5ff678cb9-qgnw5              yakd-dashboard
	71af4816f74d2       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   45ff84ab750b4       csi-hostpath-attacher-0                     kube-system
	8ad2b4d2b3966       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   5280b98753dac       csi-hostpathplugin-srnfw                    kube-system
	e01b7f799459f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   5eca2f15a7de6       snapshot-controller-7d9fbc56b8-qpq25        kube-system
	3eadefea7b82f       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   8d64acb3a014a       nvidia-device-plugin-daemonset-qh9hh        kube-system
	b37b72284c040       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago            Running             cloud-spanner-emulator                   0                   635b739ab5d5f       cloud-spanner-emulator-86bd5cbb97-kt7zs     default
	fc2ba59434a35       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   4bd2f18c25d38       metrics-server-85b7d694d7-q44mn             kube-system
	199827710f7e2       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   41c07e0474bdc       kube-ingress-dns-minikube                   kube-system
	9978c81effa86       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   3c0fba2a66b65       local-path-provisioner-648f6765c9-29skz     local-path-storage
	91d90369c0267       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              create                                   0                   96d22435ddd3c       ingress-nginx-admission-create-8h4tr        ingress-nginx
	5e0188d0e59ac       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   c4f7fbd236eac       registry-6b586f9694-ns7g9                   kube-system
	89b97e1cc3fdc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   f72978ef277e2       coredns-66bc5c9577-q7x6k                    kube-system
	00564264eaf2d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   74120e4daa8a3       storage-provisioner                         kube-system
	509b950592a64       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   39738ae66b812       kube-proxy-ck6l7                            kube-system
	c0f115c889023       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   08bb700090292       kindnet-lwg6r                               kube-system
	9486051a8e6db       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   ebc8ebe62e244       kube-scheduler-addons-808548                kube-system
	d471f8a340bfa       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   8d4504ba25314       kube-controller-manager-addons-808548       kube-system
	fed27e3c8e0a5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   73422ce7de852       etcd-addons-808548                          kube-system
	d41c518959459       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   80603ba658491       kube-apiserver-addons-808548                kube-system
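	
	The table above is the CRI-level view of the node (the same shape `sudo crictl ps -a` prints), so the Exited entries for the ingress-nginx admission create/patch jobs are expected: they are run-to-completion hooks, not failures. To reproduce the table, or narrow it to one pod (a sketch, assuming node shell access):
	
	  $ minikube -p addons-808548 ssh -- sudo crictl ps -a
	  $ minikube -p addons-808548 ssh -- sudo crictl ps -a --label io.kubernetes.pod.name=hello-world-app-5d498dc89-c7zmx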
	
	
	==> coredns [89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72] <==
	[INFO] 10.244.0.22:47249 - 12512 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004995461s
	[INFO] 10.244.0.22:44285 - 63911 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005138946s
	[INFO] 10.244.0.22:58333 - 24755 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005401496s
	[INFO] 10.244.0.22:43046 - 10475 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004828828s
	[INFO] 10.244.0.22:43047 - 9430 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004987084s
	[INFO] 10.244.0.22:40310 - 4309 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001229734s
	[INFO] 10.244.0.22:52842 - 24289 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001347579s
	[INFO] 10.244.0.24:49896 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000224644s
	[INFO] 10.244.0.24:43921 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138639s
	[INFO] 10.244.0.31:49089 - 60312 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000241056s
	[INFO] 10.244.0.31:49868 - 11016 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000208553s
	[INFO] 10.244.0.31:32770 - 28565 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00013566s
	[INFO] 10.244.0.31:58313 - 38493 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000136294s
	[INFO] 10.244.0.31:36535 - 22774 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000103668s
	[INFO] 10.244.0.31:44183 - 29504 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000084338s
	[INFO] 10.244.0.31:54395 - 60179 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003321125s
	[INFO] 10.244.0.31:44987 - 42893 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003380478s
	[INFO] 10.244.0.31:33421 - 16009 "A IN accounts.google.com.us-west1-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.004075233s
	[INFO] 10.244.0.31:58232 - 7820 "AAAA IN accounts.google.com.us-west1-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.004619618s
	[INFO] 10.244.0.31:46667 - 47774 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004601227s
	[INFO] 10.244.0.31:42143 - 9825 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.006031332s
	[INFO] 10.244.0.31:45776 - 60255 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004657853s
	[INFO] 10.244.0.31:37711 - 16526 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004725242s
	[INFO] 10.244.0.31:60750 - 12913 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001845277s
	[INFO] 10.244.0.31:47739 - 44570 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001990776s
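	
	The NXDOMAIN runs above are normal ndots:5 behavior: with the default pod resolv.conf, an external name like accounts.google.com is tried against each search-path suffix (namespace.svc.cluster.local, svc.cluster.local, cluster.local, then the host's domains) before the bare query finally returns NOERROR. To see the search path that produces this chain (a sketch, using the busybox pod from this run):
	
	  $ kubectl exec busybox -- cat /etc/resolv.conf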
	
	
	==> describe nodes <==
	Name:               addons-808548
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-808548
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=addons-808548
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_26_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-808548
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-808548"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:26:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-808548
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:30:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:30:48 +0000   Fri, 17 Oct 2025 19:26:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:30:48 +0000   Fri, 17 Oct 2025 19:26:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:30:48 +0000   Fri, 17 Oct 2025 19:26:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:30:48 +0000   Fri, 17 Oct 2025 19:26:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-808548
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                2a535284-69f6-4c0d-b477-eb46f22a04f4
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  default                     cloud-spanner-emulator-86bd5cbb97-kt7zs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  default                     hello-world-app-5d498dc89-c7zmx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gadget                      gadget-qzzq2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  gcp-auth                    gcp-auth-78565c9fb4-cnh4w                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-bszbb    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m38s
	  kube-system                 amd-gpu-device-plugin-s9xrd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 coredns-66bc5c9577-q7x6k                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m40s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 csi-hostpathplugin-srnfw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 etcd-addons-808548                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m45s
	  kube-system                 kindnet-lwg6r                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m40s
	  kube-system                 kube-apiserver-addons-808548                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-addons-808548        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-proxy-ck6l7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-scheduler-addons-808548                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 metrics-server-85b7d694d7-q44mn              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m39s
	  kube-system                 nvidia-device-plugin-daemonset-qh9hh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 registry-6b586f9694-ns7g9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 registry-creds-764b6fb674-d7p4h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 registry-proxy-5gbvf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 snapshot-controller-7d9fbc56b8-q75kr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 snapshot-controller-7d9fbc56b8-qpq25         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  local-path-storage          local-path-provisioner-648f6765c9-29skz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-qgnw5               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m39s  kube-proxy       
	  Normal  Starting                 4m45s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s  kubelet          Node addons-808548 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s  kubelet          Node addons-808548 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s  kubelet          Node addons-808548 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m41s  node-controller  Node addons-808548 event: Registered Node addons-808548 in Controller
	  Normal  NodeReady                3m59s  kubelet          Node addons-808548 status is now: NodeReady
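	
	The node description above is `kubectl describe node addons-808548` output; note hello-world-app-5d498dc89-c7zmx at age 1s, matching the 19:30:57 CRI-O pull, and a Ready condition held since 19:26:59. Two quick follow-ups when reading a report like this (a sketch):
	
	  $ kubectl describe node addons-808548
	  $ kubectl get pods -A --field-selector spec.nodeName=addons-808548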
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
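	
	The repeating "martian source 10.244.0.20 from 127.0.0.1" lines are loopback-sourced packets arriving on the pod network; this is a known side effect of kube-proxy setting route_localnet=1 for localhost NodePorts (see the kube-proxy log below) and is noise rather than a failure. To confirm the sysctl on the node (a sketch):
	
	  $ minikube -p addons-808548 ssh -- sysctl net.ipv4.conf.all.route_localnet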
	
	
	==> etcd [fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c] <==
	{"level":"warn","ts":"2025-10-17T19:26:10.134291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.142772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.150417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.158206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.165721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.172487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.180713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.189666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.199333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.216555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.223272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.229927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.283628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:21.289617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:21.296477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:47.723298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:47.738552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:47.745084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38984","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:27:18.764411Z","caller":"traceutil/trace.go:172","msg":"trace[1794263440] transaction","detail":"{read_only:false; response_revision:1034; number_of_response:1; }","duration":"108.930138ms","start":"2025-10-17T19:27:18.655453Z","end":"2025-10-17T19:27:18.764383Z","steps":["trace[1794263440] 'process raft request'  (duration: 108.688647ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:27:24.486551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.929495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:27:24.486631Z","caller":"traceutil/trace.go:172","msg":"trace[1243683834] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1053; }","duration":"150.057781ms","start":"2025-10-17T19:27:24.336558Z","end":"2025-10-17T19:27:24.486616Z","steps":["trace[1243683834] 'range keys from in-memory index tree'  (duration: 149.84802ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:27:24.678934Z","caller":"traceutil/trace.go:172","msg":"trace[1188612772] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"123.923087ms","start":"2025-10-17T19:27:24.554988Z","end":"2025-10-17T19:27:24.678911Z","steps":["trace[1188612772] 'process raft request'  (duration: 123.804357ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:27:41.945236Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.285017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:27:41.945312Z","caller":"traceutil/trace.go:172","msg":"trace[825653420] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1153; }","duration":"110.378418ms","start":"2025-10-17T19:27:41.834915Z","end":"2025-10-17T19:27:41.945294Z","steps":["trace[825653420] 'agreement among raft nodes before linearized reading'  (duration: 32.147742ms)","trace[825653420] 'range keys from in-memory index tree'  (duration: 78.108542ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:27:41.945485Z","caller":"traceutil/trace.go:172","msg":"trace[1733746361] transaction","detail":"{read_only:false; response_revision:1154; number_of_response:1; }","duration":"156.084191ms","start":"2025-10-17T19:27:41.789379Z","end":"2025-10-17T19:27:41.945463Z","steps":["trace[1733746361] 'process raft request'  (duration: 77.735531ms)","trace[1733746361] 'compare'  (duration: 78.15599ms)"],"step_count":2}
	
	
	==> gcp-auth [4b012af7a50a8d7a7a201239960e59b040263968ffd8451a94537a131b8dbf3a] <==
	2025/10/17 19:27:45 GCP Auth Webhook started!
	2025/10/17 19:28:04 Ready to marshal response ...
	2025/10/17 19:28:04 Ready to write response ...
	2025/10/17 19:28:04 Ready to marshal response ...
	2025/10/17 19:28:04 Ready to write response ...
	2025/10/17 19:28:05 Ready to marshal response ...
	2025/10/17 19:28:05 Ready to write response ...
	2025/10/17 19:28:23 Ready to marshal response ...
	2025/10/17 19:28:23 Ready to write response ...
	2025/10/17 19:28:24 Ready to marshal response ...
	2025/10/17 19:28:24 Ready to write response ...
	2025/10/17 19:28:24 Ready to marshal response ...
	2025/10/17 19:28:24 Ready to write response ...
	2025/10/17 19:28:27 Ready to marshal response ...
	2025/10/17 19:28:27 Ready to write response ...
	2025/10/17 19:28:28 Ready to marshal response ...
	2025/10/17 19:28:28 Ready to write response ...
	2025/10/17 19:28:38 Ready to marshal response ...
	2025/10/17 19:28:38 Ready to write response ...
	2025/10/17 19:28:52 Ready to marshal response ...
	2025/10/17 19:28:52 Ready to write response ...
	2025/10/17 19:30:57 Ready to marshal response ...
	2025/10/17 19:30:57 Ready to write response ...
	
	
	==> kernel <==
	 19:30:58 up  1:13,  0 user,  load average: 0.55, 1.33, 1.44
	Linux addons-808548 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb] <==
	I1017 19:28:49.529299       1 main.go:301] handling current node
	I1017 19:28:59.530867       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:28:59.530901       1 main.go:301] handling current node
	I1017 19:29:09.528552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:29:09.528614       1 main.go:301] handling current node
	I1017 19:29:19.527202       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:29:19.527226       1 main.go:301] handling current node
	I1017 19:29:29.528376       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:29:29.528419       1 main.go:301] handling current node
	I1017 19:29:39.528095       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:29:39.528128       1 main.go:301] handling current node
	I1017 19:29:49.527838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:29:49.527877       1 main.go:301] handling current node
	I1017 19:29:59.527586       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:29:59.527632       1 main.go:301] handling current node
	I1017 19:30:09.535867       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:30:09.535900       1 main.go:301] handling current node
	I1017 19:30:19.527286       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:30:19.527317       1 main.go:301] handling current node
	I1017 19:30:29.537375       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:30:29.537404       1 main.go:301] handling current node
	I1017 19:30:39.526933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:30:39.527010       1 main.go:301] handling current node
	I1017 19:30:49.526911       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:30:49.526962       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14] <==
	 > logger="UnhandledError"
	E1017 19:27:13.604444       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.32.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.32.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.32.144:443: connect: connection refused" logger="UnhandledError"
	E1017 19:27:13.610179       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.32.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.32.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.32.144:443: connect: connection refused" logger="UnhandledError"
	W1017 19:27:14.606715       1 handler_proxy.go:99] no RequestInfo found in the context
	W1017 19:27:14.606715       1 handler_proxy.go:99] no RequestInfo found in the context
	E1017 19:27:14.606774       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1017 19:27:14.606794       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1017 19:27:14.606836       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1017 19:27:14.607970       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1017 19:27:18.692658       1 handler_proxy.go:99] no RequestInfo found in the context
	E1017 19:27:18.692659       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.32.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.32.144:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1017 19:27:18.692711       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1017 19:27:18.706917       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1017 19:28:13.137171       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58482: use of closed network connection
	E1017 19:28:13.291990       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58510: use of closed network connection
	I1017 19:28:28.625657       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1017 19:28:28.817922       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.183.48"}
	I1017 19:28:38.945687       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1017 19:30:57.210543       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.41.202"}
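	
	The v1beta1.metrics.k8s.io errors above are the aggregation layer probing metrics-server before its endpoint was up (connection refused, then 503); the later "Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager" line shows it recovered. When a metrics addon test fails, the first things to check are (a sketch):
	
	  $ kubectl get apiservice v1beta1.metrics.k8s.io
	  $ kubectl -n kube-system get endpoints metrics-server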
	
	
	==> kube-controller-manager [d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561] <==
	I1017 19:26:17.698696       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 19:26:17.698729       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 19:26:17.699130       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:26:17.699159       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 19:26:17.699377       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 19:26:17.699432       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 19:26:17.699446       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:26:17.699659       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:26:17.700833       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:26:17.702791       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 19:26:17.703452       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:26:17.706523       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:26:17.709833       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 19:26:17.719143       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1017 19:26:19.899301       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1017 19:26:47.710678       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1017 19:26:47.710869       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1017 19:26:47.710905       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1017 19:26:47.729644       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1017 19:26:47.733008       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1017 19:26:47.811440       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:26:47.833803       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:27:02.703974       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1017 19:27:17.817415       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1017 19:27:17.843488       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0] <==
	I1017 19:26:19.172277       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:26:19.524891       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:26:19.625842       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:26:19.625883       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:26:19.625995       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:26:19.754242       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:26:19.754375       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:26:19.777311       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:26:19.777826       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:26:19.778283       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:26:19.780327       1 config.go:200] "Starting service config controller"
	I1017 19:26:19.780389       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:26:19.780433       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:26:19.780458       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:26:19.780515       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:26:19.780541       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:26:19.781197       1 config.go:309] "Starting node config controller"
	I1017 19:26:19.781253       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:26:19.882652       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:26:19.883251       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:26:19.883357       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:26:19.883397       1 shared_informer.go:356] "Caches are synced" controller="node config"
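	
	kube-proxy here runs the iptables proxier and explicitly sets route_localnet=1, which is what generates the dmesg martian-source noise above. The "nodePortAddresses is unset" line is a configuration warning, not an error; on a kubeadm-provisioned cluster the setting lives in the kube-proxy ConfigMap (a sketch):
	
	  $ kubectl -n kube-system get configmap kube-proxy -o yaml | grep -A1 nodePortAddresses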
	
	
	==> kube-scheduler [9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a] <==
	E1017 19:26:11.015145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:26:11.015086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:26:11.015182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:26:11.015449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:26:11.015581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:26:11.015606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:26:11.015714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:26:11.015717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:26:11.015762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:26:11.014951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:26:11.015798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:26:11.015802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:26:11.015818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:26:11.015921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:26:11.015994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:26:11.016115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:26:11.016115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:26:11.856512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:26:11.861684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:26:11.901872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:26:11.905921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:26:11.946287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:26:11.996839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:26:11.996903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1017 19:26:14.613381       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
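	
	The burst of "Failed to watch ... is forbidden" errors is the scheduler starting before its RBAC bindings propagate; the final "Caches are synced" line at 19:26:14 confirms it settled. Re-checking after startup should show a clean tail (a sketch):
	
	  $ kubectl -n kube-system logs kube-scheduler-addons-808548 --tail=20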
	
	
	==> kubelet <==
	Oct 17 19:29:00 addons-808548 kubelet[1279]: I1017 19:29:00.750537    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^837bc86a-ab8f-11f0-b380-6ed16bd1a6d2\") pod \"5505bd99-9a12-463c-886f-0b14d3cb8d7b\" (UID: \"5505bd99-9a12-463c-886f-0b14d3cb8d7b\") "
	Oct 17 19:29:00 addons-808548 kubelet[1279]: I1017 19:29:00.750594    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dtrl\" (UniqueName: \"kubernetes.io/projected/5505bd99-9a12-463c-886f-0b14d3cb8d7b-kube-api-access-6dtrl\") pod \"5505bd99-9a12-463c-886f-0b14d3cb8d7b\" (UID: \"5505bd99-9a12-463c-886f-0b14d3cb8d7b\") "
	Oct 17 19:29:00 addons-808548 kubelet[1279]: I1017 19:29:00.750761    1279 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5505bd99-9a12-463c-886f-0b14d3cb8d7b-gcp-creds\") on node \"addons-808548\" DevicePath \"\""
	Oct 17 19:29:00 addons-808548 kubelet[1279]: I1017 19:29:00.753390    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5505bd99-9a12-463c-886f-0b14d3cb8d7b-kube-api-access-6dtrl" (OuterVolumeSpecName: "kube-api-access-6dtrl") pod "5505bd99-9a12-463c-886f-0b14d3cb8d7b" (UID: "5505bd99-9a12-463c-886f-0b14d3cb8d7b"). InnerVolumeSpecName "kube-api-access-6dtrl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 17 19:29:00 addons-808548 kubelet[1279]: I1017 19:29:00.754114    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^837bc86a-ab8f-11f0-b380-6ed16bd1a6d2" (OuterVolumeSpecName: "task-pv-storage") pod "5505bd99-9a12-463c-886f-0b14d3cb8d7b" (UID: "5505bd99-9a12-463c-886f-0b14d3cb8d7b"). InnerVolumeSpecName "pvc-63a145f2-5ffe-4c36-9458-3a5805413430". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 17 19:29:00 addons-808548 kubelet[1279]: I1017 19:29:00.851379    1279 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-63a145f2-5ffe-4c36-9458-3a5805413430\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^837bc86a-ab8f-11f0-b380-6ed16bd1a6d2\") on node \"addons-808548\" "
	Oct 17 19:29:00 addons-808548 kubelet[1279]: I1017 19:29:00.851417    1279 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dtrl\" (UniqueName: \"kubernetes.io/projected/5505bd99-9a12-463c-886f-0b14d3cb8d7b-kube-api-access-6dtrl\") on node \"addons-808548\" DevicePath \"\""
	Oct 17 19:29:00 addons-808548 kubelet[1279]: I1017 19:29:00.856517    1279 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-63a145f2-5ffe-4c36-9458-3a5805413430" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^837bc86a-ab8f-11f0-b380-6ed16bd1a6d2") on node "addons-808548"
	Oct 17 19:29:00 addons-808548 kubelet[1279]: I1017 19:29:00.952226    1279 reconciler_common.go:299] "Volume detached for volume \"pvc-63a145f2-5ffe-4c36-9458-3a5805413430\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^837bc86a-ab8f-11f0-b380-6ed16bd1a6d2\") on node \"addons-808548\" DevicePath \"\""
	Oct 17 19:29:01 addons-808548 kubelet[1279]: I1017 19:29:01.096599    1279 scope.go:117] "RemoveContainer" containerID="f0905a0ac9d550555e3c7dbc498c6786e8040258b9fa03c7a5b61132565a2015"
	Oct 17 19:29:01 addons-808548 kubelet[1279]: I1017 19:29:01.107477    1279 scope.go:117] "RemoveContainer" containerID="f0905a0ac9d550555e3c7dbc498c6786e8040258b9fa03c7a5b61132565a2015"
	Oct 17 19:29:01 addons-808548 kubelet[1279]: E1017 19:29:01.107979    1279 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0905a0ac9d550555e3c7dbc498c6786e8040258b9fa03c7a5b61132565a2015\": container with ID starting with f0905a0ac9d550555e3c7dbc498c6786e8040258b9fa03c7a5b61132565a2015 not found: ID does not exist" containerID="f0905a0ac9d550555e3c7dbc498c6786e8040258b9fa03c7a5b61132565a2015"
	Oct 17 19:29:01 addons-808548 kubelet[1279]: I1017 19:29:01.108025    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0905a0ac9d550555e3c7dbc498c6786e8040258b9fa03c7a5b61132565a2015"} err="failed to get container status \"f0905a0ac9d550555e3c7dbc498c6786e8040258b9fa03c7a5b61132565a2015\": rpc error: code = NotFound desc = could not find container \"f0905a0ac9d550555e3c7dbc498c6786e8040258b9fa03c7a5b61132565a2015\": container with ID starting with f0905a0ac9d550555e3c7dbc498c6786e8040258b9fa03c7a5b61132565a2015 not found: ID does not exist"
	Oct 17 19:29:01 addons-808548 kubelet[1279]: I1017 19:29:01.390141    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5505bd99-9a12-463c-886f-0b14d3cb8d7b" path="/var/lib/kubelet/pods/5505bd99-9a12-463c-886f-0b14d3cb8d7b/volumes"
	Oct 17 19:29:02 addons-808548 kubelet[1279]: E1017 19:29:02.644480    1279 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-d7p4h" podUID="325a60a7-5f62-4ab1-9199-ac88319f2912"
	Oct 17 19:29:13 addons-808548 kubelet[1279]: I1017 19:29:13.420490    1279 scope.go:117] "RemoveContainer" containerID="81b4b2843a1008b20cf600a9734d116864da872b77fb840aff49b86e0148a69f"
	Oct 17 19:29:13 addons-808548 kubelet[1279]: I1017 19:29:13.431260    1279 scope.go:117] "RemoveContainer" containerID="bd1bb2c26a3de9d4505d91fb83f97ae960789ecec7dfb46554519ec56bc8fa3d"
	Oct 17 19:29:13 addons-808548 kubelet[1279]: I1017 19:29:13.440872    1279 scope.go:117] "RemoveContainer" containerID="2cefe4efd328a3b8fab07f304430aa301040427dedcc32467bd494ff0bca4d80"
	Oct 17 19:29:13 addons-808548 kubelet[1279]: I1017 19:29:13.451574    1279 scope.go:117] "RemoveContainer" containerID="6e6630ad6a4be2196ad872c81ac4fdc3fdcfa95ee1ca673b918842c71d8d3fdf"
	Oct 17 19:29:17 addons-808548 kubelet[1279]: I1017 19:29:17.176062    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-d7p4h" podStartSLOduration=175.702599488 podStartE2EDuration="2m58.17603824s" podCreationTimestamp="2025-10-17 19:26:19 +0000 UTC" firstStartedPulling="2025-10-17 19:29:14.411811217 +0000 UTC m=+181.106994171" lastFinishedPulling="2025-10-17 19:29:16.885249968 +0000 UTC m=+183.580432923" observedRunningTime="2025-10-17 19:29:17.174671772 +0000 UTC m=+183.869854766" watchObservedRunningTime="2025-10-17 19:29:17.17603824 +0000 UTC m=+183.871221204"
	Oct 17 19:29:55 addons-808548 kubelet[1279]: I1017 19:29:55.387853    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-qh9hh" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:29:58 addons-808548 kubelet[1279]: I1017 19:29:58.388062    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5gbvf" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:30:17 addons-808548 kubelet[1279]: I1017 19:30:17.387430    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-s9xrd" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:30:57 addons-808548 kubelet[1279]: I1017 19:30:57.223258    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6a3babf3-ffd0-4b91-b85a-029841f7dd87-gcp-creds\") pod \"hello-world-app-5d498dc89-c7zmx\" (UID: \"6a3babf3-ffd0-4b91-b85a-029841f7dd87\") " pod="default/hello-world-app-5d498dc89-c7zmx"
	Oct 17 19:30:57 addons-808548 kubelet[1279]: I1017 19:30:57.223327    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97pxz\" (UniqueName: \"kubernetes.io/projected/6a3babf3-ffd0-4b91-b85a-029841f7dd87-kube-api-access-97pxz\") pod \"hello-world-app-5d498dc89-c7zmx\" (UID: \"6a3babf3-ffd0-4b91-b85a-029841f7dd87\") " pod="default/hello-world-app-5d498dc89-c7zmx"
	
	
	==> storage-provisioner [00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32] <==
	W1017 19:30:33.079997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:35.083059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:35.087277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:37.090412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:37.094567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:39.098338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:39.103763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:41.106992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:41.111083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:43.114673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:43.119207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:45.122702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:45.127787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:47.132535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:47.136789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:49.139733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:49.144814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:51.148761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:51.153066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:53.156632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:53.160973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:55.163977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:55.168059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:57.171851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:30:57.176320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-808548 -n addons-808548
helpers_test.go:269: (dbg) Run:  kubectl --context addons-808548 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-8h4tr ingress-nginx-admission-patch-56ccn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-808548 describe pod ingress-nginx-admission-create-8h4tr ingress-nginx-admission-patch-56ccn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-808548 describe pod ingress-nginx-admission-create-8h4tr ingress-nginx-admission-patch-56ccn: exit status 1 (67.519024ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8h4tr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-56ccn" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-808548 describe pod ingress-nginx-admission-create-8h4tr ingress-nginx-admission-patch-56ccn: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (261.068187ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:30:59.882041  155297 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:30:59.882479  155297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:30:59.882501  155297 out.go:374] Setting ErrFile to fd 2...
	I1017 19:30:59.882508  155297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:30:59.882833  155297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:30:59.883296  155297 mustload.go:65] Loading cluster: addons-808548
	I1017 19:30:59.883909  155297 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:30:59.883940  155297 addons.go:606] checking whether the cluster is paused
	I1017 19:30:59.884098  155297 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:30:59.884123  155297 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:30:59.884801  155297 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:30:59.906504  155297 ssh_runner.go:195] Run: systemctl --version
	I1017 19:30:59.906599  155297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:30:59.926892  155297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:31:00.025974  155297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:31:00.026107  155297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:31:00.060182  155297 cri.go:89] found id: "e974c43b9027e157257f05ede1a3f86c839b07db82be9fb7c16ffae7189b011a"
	I1017 19:31:00.060210  155297 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:31:00.060217  155297 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:31:00.060221  155297 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:31:00.060224  155297 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:31:00.060227  155297 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:31:00.060229  155297 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:31:00.060232  155297 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:31:00.060234  155297 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:31:00.060246  155297 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:31:00.060248  155297 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:31:00.060251  155297 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:31:00.060253  155297 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:31:00.060255  155297 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:31:00.060258  155297 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:31:00.060264  155297 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:31:00.060267  155297 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:31:00.060271  155297 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:31:00.060273  155297 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:31:00.060290  155297 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:31:00.060296  155297 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:31:00.060298  155297 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:31:00.060301  155297 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:31:00.060303  155297 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:31:00.060306  155297 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:31:00.060308  155297 cri.go:89] found id: ""
	I1017 19:31:00.060352  155297 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:31:00.076140  155297 out.go:203] 
	W1017 19:31:00.077537  155297 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:31:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:31:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:31:00.077558  155297 out.go:285] * 
	* 
	W1017 19:31:00.080803  155297 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:31:00.082669  155297 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable ingress --alsologtostderr -v=1: exit status 11 (240.065067ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:31:00.133413  155359 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:31:00.133761  155359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:31:00.133774  155359 out.go:374] Setting ErrFile to fd 2...
	I1017 19:31:00.133780  155359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:31:00.134021  155359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:31:00.134332  155359 mustload.go:65] Loading cluster: addons-808548
	I1017 19:31:00.134715  155359 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:31:00.134735  155359 addons.go:606] checking whether the cluster is paused
	I1017 19:31:00.134879  155359 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:31:00.134896  155359 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:31:00.135314  155359 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:31:00.153699  155359 ssh_runner.go:195] Run: systemctl --version
	I1017 19:31:00.153782  155359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:31:00.173219  155359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:31:00.270798  155359 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:31:00.270867  155359 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:31:00.301553  155359 cri.go:89] found id: "e974c43b9027e157257f05ede1a3f86c839b07db82be9fb7c16ffae7189b011a"
	I1017 19:31:00.301586  155359 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:31:00.301590  155359 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:31:00.301593  155359 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:31:00.301596  155359 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:31:00.301600  155359 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:31:00.301603  155359 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:31:00.301605  155359 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:31:00.301608  155359 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:31:00.301617  155359 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:31:00.301620  155359 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:31:00.301623  155359 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:31:00.301625  155359 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:31:00.301627  155359 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:31:00.301630  155359 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:31:00.301636  155359 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:31:00.301642  155359 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:31:00.301657  155359 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:31:00.301662  155359 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:31:00.301664  155359 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:31:00.301667  155359 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:31:00.301669  155359 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:31:00.301672  155359 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:31:00.301675  155359 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:31:00.301678  155359 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:31:00.301680  155359 cri.go:89] found id: ""
	I1017 19:31:00.301729  155359 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:31:00.316350  155359 out.go:203] 
	W1017 19:31:00.317821  155359 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:31:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:31:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:31:00.317842  155359 out.go:285] * 
	* 
	W1017 19:31:00.320927  155359 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:31:00.322465  155359 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (151.96s)
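
Every addon-disable failure in this report reduces to the same pre-check. Before disabling an addon, minikube verifies the cluster is not paused: it lists kube-system containers via crictl, then runs `sudo runc list -f json` on the node. Here that command exits 1 with `open /run/runc: no such file or directory`, so the check itself fails and minikube aborts with MK_ADDON_DISABLE_PAUSED before the addon is ever touched. The Go sketch below mirrors those two steps; it is a minimal illustration with made-up helper names, not minikube's actual cri.go code, and assumes crictl and runc are on PATH with sudo access on the node.

// checkpaused.go: a minimal sketch of minikube's paused-state pre-check,
// run locally on the node instead of through minikube's SSH runner.
// Helper names are illustrative, not minikube's real API.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the `crictl ps -a --quiet --label ...`
// step seen in the log: it returns all container IDs in kube-system.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	return strings.Fields(string(out)), nil
}

// checkPaused mirrors the failing step. On this node runc exits 1
// because /run/runc does not exist, so the error branch is always taken.
func checkPaused() error {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return fmt.Errorf("runc list -f json: %w", err)
	}
	_ = out // minikube would parse this JSON for paused containers
	return nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("list kube-system containers:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
	if err := checkPaused(); err != nil {
		// The branch every `addons disable` run in this report hits.
		fmt.Println("MK_ADDON_DISABLE_PAUSED (sketch):", err)
	}
}

Because this pre-check fails immediately, every disable command exits 11 in roughly 250ms regardless of which addon is targeted.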

TestAddons/parallel/InspektorGadget (5.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-qzzq2" [3c693438-a98f-4edd-9167-2470572acb2d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003916475s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (246.417881ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:28:37.019254  151887 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:28:37.019525  151887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:37.019535  151887 out.go:374] Setting ErrFile to fd 2...
	I1017 19:28:37.019540  151887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:37.019756  151887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:28:37.020093  151887 mustload.go:65] Loading cluster: addons-808548
	I1017 19:28:37.020490  151887 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:37.020513  151887 addons.go:606] checking whether the cluster is paused
	I1017 19:28:37.020611  151887 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:37.020626  151887 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:28:37.021044  151887 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:28:37.040252  151887 ssh_runner.go:195] Run: systemctl --version
	I1017 19:28:37.040301  151887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:28:37.059304  151887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:28:37.157032  151887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:28:37.157111  151887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:28:37.188510  151887 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:28:37.188547  151887 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:28:37.188552  151887 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:28:37.188554  151887 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:28:37.188557  151887 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:28:37.188560  151887 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:28:37.188563  151887 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:28:37.188565  151887 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:28:37.188568  151887 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:28:37.188583  151887 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:28:37.188585  151887 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:28:37.188588  151887 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:28:37.188590  151887 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:28:37.188593  151887 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:28:37.188595  151887 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:28:37.188607  151887 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:28:37.188614  151887 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:28:37.188619  151887 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:28:37.188621  151887 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:28:37.188623  151887 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:28:37.188626  151887 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:28:37.188628  151887 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:28:37.188630  151887 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:28:37.188632  151887 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:28:37.188635  151887 cri.go:89] found id: ""
	I1017 19:28:37.188682  151887 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:28:37.203375  151887 out.go:203] 
	W1017 19:28:37.205027  151887 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:28:37.205047  151887 out.go:285] * 
	* 
	W1017 19:28:37.208069  151887 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:28:37.210156  151887 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)
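
The `open /run/runc: no such file or directory` error suggests CRI-O on this node is not using runc as its OCI runtime; CRI-O commonly defaults to crun, whose state directory is /run/crun, in which case `runc list` has no state directory to read. That is an inference from the error message, not something this log confirms. A small diagnostic sketch along those lines, assuming shell access to the node:

// runtimecheck.go: checks which OCI runtime state directories exist.
// On a CRI-O node using crun, /run/crun typically exists while
// /run/runc does not, which would match the error in this report.
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, dir := range []string{"/run/runc", "/run/crun"} {
		if _, err := os.Stat(dir); err == nil {
			fmt.Println(dir, "exists")
		} else {
			fmt.Println(dir, "missing:", err)
		}
	}
}

If /run/crun exists while /run/runc does not, a paused check that shells out to runc directly can never succeed on this image; it would have to query the runtime CRI-O actually uses (for example via crictl).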

TestAddons/parallel/MetricsServer (5.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.583465ms
I1017 19:28:13.552607  139217 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1017 19:28:13.552624  139217 kapi.go:107] duration metric: took 3.356982ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-q44mn" [8400f12c-e748-4220-a5b1-bd66d3cb4158] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00349775s
addons_test.go:463: (dbg) Run:  kubectl --context addons-808548 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (239.312647ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:28:18.659795  149851 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:28:18.660098  149851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:18.660108  149851 out.go:374] Setting ErrFile to fd 2...
	I1017 19:28:18.660112  149851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:18.660444  149851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:28:18.660725  149851 mustload.go:65] Loading cluster: addons-808548
	I1017 19:28:18.661102  149851 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:18.661118  149851 addons.go:606] checking whether the cluster is paused
	I1017 19:28:18.661193  149851 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:18.661205  149851 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:28:18.661571  149851 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:28:18.680258  149851 ssh_runner.go:195] Run: systemctl --version
	I1017 19:28:18.680323  149851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:28:18.700657  149851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:28:18.797586  149851 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:28:18.797654  149851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:28:18.827914  149851 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:28:18.827935  149851 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:28:18.827939  149851 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:28:18.827953  149851 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:28:18.827970  149851 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:28:18.827973  149851 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:28:18.827976  149851 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:28:18.827979  149851 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:28:18.827984  149851 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:28:18.827993  149851 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:28:18.827999  149851 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:28:18.828002  149851 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:28:18.828005  149851 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:28:18.828007  149851 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:28:18.828010  149851 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:28:18.828018  149851 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:28:18.828023  149851 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:28:18.828027  149851 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:28:18.828029  149851 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:28:18.828031  149851 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:28:18.828033  149851 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:28:18.828036  149851 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:28:18.828038  149851 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:28:18.828040  149851 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:28:18.828048  149851 cri.go:89] found id: ""
	I1017 19:28:18.828085  149851 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:28:18.842690  149851 out.go:203] 
	W1017 19:28:18.844221  149851 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:28:18.844246  149851 out.go:285] * 
	* 
	W1017 19:28:18.847315  149851 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:28:18.849086  149851 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)
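
For the passing half of these tests: lines such as `waiting 6m0s for pods matching "k8s-app=metrics-server"` come from a poll-until-healthy helper in the harness. A rough stand-in for that wait, polling kubectl instead of using the harness's client-go helpers (the function names and the 2-second interval are illustrative, and phase-only checking is a simplification of the real readiness check):

// waitpods.go: a rough stand-in for the label-selector wait the harness
// performs (addons_test.go:457), implemented via kubectl polling.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podsRunning reports whether at least one pod matches the selector and
// every matching pod is in phase Running.
func podsRunning(context, ns, selector string) bool {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "pods", "-n", ns, "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false
	}
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if podsRunning("addons-808548", "kube-system", "k8s-app=metrics-server") {
			fmt.Println("pods healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pods")
}

In this run the wait succeeds within about 5s; only the subsequent disable step fails, which is why the failing durations cluster just above the pod-health wait time.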

TestAddons/parallel/CSI (48.38s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1017 19:28:13.549314  139217 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.365904ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-808548 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-808548 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c0dfb04b-55b6-4548-8d6a-84c1ee00b99d] Pending
helpers_test.go:352: "task-pv-pod" [c0dfb04b-55b6-4548-8d6a-84c1ee00b99d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c0dfb04b-55b6-4548-8d6a-84c1ee00b99d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003145902s
addons_test.go:572: (dbg) Run:  kubectl --context addons-808548 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-808548 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-808548 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-808548 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-808548 delete pod task-pv-pod: (1.012093888s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-808548 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-808548 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-808548 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [5505bd99-9a12-463c-886f-0b14d3cb8d7b] Pending
helpers_test.go:352: "task-pv-pod-restore" [5505bd99-9a12-463c-886f-0b14d3cb8d7b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [5505bd99-9a12-463c-886f-0b14d3cb8d7b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00373549s
addons_test.go:614: (dbg) Run:  kubectl --context addons-808548 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-808548 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-808548 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (239.656253ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:29:01.491849  152956 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:29:01.492151  152956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:29:01.492161  152956 out.go:374] Setting ErrFile to fd 2...
	I1017 19:29:01.492165  152956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:29:01.492358  152956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:29:01.492618  152956 mustload.go:65] Loading cluster: addons-808548
	I1017 19:29:01.493037  152956 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:29:01.493058  152956 addons.go:606] checking whether the cluster is paused
	I1017 19:29:01.493227  152956 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:29:01.493245  152956 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:29:01.493686  152956 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:29:01.511400  152956 ssh_runner.go:195] Run: systemctl --version
	I1017 19:29:01.511454  152956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:29:01.530226  152956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:29:01.626952  152956 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:29:01.627037  152956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:29:01.658576  152956 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:29:01.658613  152956 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:29:01.658617  152956 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:29:01.658621  152956 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:29:01.658624  152956 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:29:01.658627  152956 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:29:01.658630  152956 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:29:01.658632  152956 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:29:01.658634  152956 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:29:01.658639  152956 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:29:01.658644  152956 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:29:01.658646  152956 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:29:01.658649  152956 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:29:01.658651  152956 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:29:01.658653  152956 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:29:01.658660  152956 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:29:01.658665  152956 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:29:01.658670  152956 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:29:01.658672  152956 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:29:01.658674  152956 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:29:01.658677  152956 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:29:01.658680  152956 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:29:01.658682  152956 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:29:01.658685  152956 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:29:01.658687  152956 cri.go:89] found id: ""
	I1017 19:29:01.658732  152956 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:29:01.674199  152956 out.go:203] 
	W1017 19:29:01.675653  152956 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:29:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:29:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:29:01.675680  152956 out.go:285] * 
	* 
	W1017 19:29:01.678794  152956 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:29:01.680240  152956 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (241.027206ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1017 19:29:01.731304  153017 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:29:01.731579  153017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:29:01.731588  153017 out.go:374] Setting ErrFile to fd 2...
	I1017 19:29:01.731593  153017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:29:01.731819  153017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:29:01.732103  153017 mustload.go:65] Loading cluster: addons-808548
	I1017 19:29:01.732451  153017 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:29:01.732468  153017 addons.go:606] checking whether the cluster is paused
	I1017 19:29:01.732544  153017 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:29:01.732555  153017 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:29:01.732947  153017 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:29:01.752058  153017 ssh_runner.go:195] Run: systemctl --version
	I1017 19:29:01.752128  153017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:29:01.771203  153017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:29:01.869201  153017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:29:01.869284  153017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:29:01.900550  153017 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:29:01.900577  153017 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:29:01.900585  153017 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:29:01.900591  153017 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:29:01.900594  153017 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:29:01.900597  153017 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:29:01.900600  153017 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:29:01.900603  153017 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:29:01.900606  153017 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:29:01.900612  153017 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:29:01.900615  153017 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:29:01.900617  153017 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:29:01.900620  153017 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:29:01.900623  153017 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:29:01.900626  153017 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:29:01.900635  153017 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:29:01.900643  153017 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:29:01.900648  153017 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:29:01.900650  153017 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:29:01.900653  153017 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:29:01.900655  153017 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:29:01.900657  153017 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:29:01.900660  153017 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:29:01.900662  153017 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:29:01.900664  153017 cri.go:89] found id: ""
	I1017 19:29:01.900703  153017 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:29:01.915304  153017 out.go:203] 
	W1017 19:29:01.916539  153017 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:29:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:29:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:29:01.916560  153017 out.go:285] * 
	* 
	W1017 19:29:01.919627  153017 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:29:01.921229  153017 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (48.38s)
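Every exit-status-11 failure in this group fails the same way, as the stderr above shows: before enabling or disabling an addon, minikube checks whether the cluster is paused by first listing kube-system containers through crictl (which succeeds here, returning 24 IDs) and then asking runc for its container list. On this crio node /run/runc does not exist, so "sudo runc list -f json" exits 1 and the addon command aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED on enables). The check can be reproduced by hand with the two commands copied from the log, run inside the node (for example via "minikube ssh"):

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system  # succeeds; prints the 24 container IDs
    sudo runc list -f json  # fails: open /run/runc: no such file or directory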
x
+
TestAddons/parallel/Headlamp (2.63s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-808548 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-808548 --alsologtostderr -v=1: exit status 11 (254.64738ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1017 19:28:13.601700  148981 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:28:13.602051  148981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:13.602065  148981 out.go:374] Setting ErrFile to fd 2...
	I1017 19:28:13.602071  148981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:13.602342  148981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:28:13.602684  148981 mustload.go:65] Loading cluster: addons-808548
	I1017 19:28:13.603078  148981 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:13.603099  148981 addons.go:606] checking whether the cluster is paused
	I1017 19:28:13.603208  148981 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:13.603225  148981 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:28:13.603627  148981 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:28:13.623546  148981 ssh_runner.go:195] Run: systemctl --version
	I1017 19:28:13.623631  148981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:28:13.643262  148981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:28:13.744001  148981 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:28:13.744094  148981 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:28:13.774687  148981 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:28:13.774713  148981 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:28:13.774719  148981 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:28:13.774723  148981 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:28:13.774727  148981 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:28:13.774731  148981 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:28:13.774734  148981 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:28:13.774771  148981 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:28:13.774775  148981 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:28:13.774786  148981 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:28:13.774794  148981 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:28:13.774799  148981 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:28:13.774805  148981 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:28:13.774810  148981 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:28:13.774817  148981 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:28:13.774825  148981 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:28:13.774832  148981 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:28:13.774838  148981 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:28:13.774841  148981 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:28:13.774844  148981 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:28:13.774848  148981 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:28:13.774853  148981 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:28:13.774859  148981 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:28:13.774863  148981 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:28:13.774867  148981 cri.go:89] found id: ""
	I1017 19:28:13.774917  148981 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:28:13.790632  148981 out.go:203] 
	W1017 19:28:13.792217  148981 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:28:13.792240  148981 out.go:285] * 
	* 
	W1017 19:28:13.795240  148981 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:28:13.797087  148981 out.go:203] 
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-808548 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-808548
helpers_test.go:243: (dbg) docker inspect addons-808548:
-- stdout --
	[
	    {
	        "Id": "8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89",
	        "Created": "2025-10-17T19:25:58.610025851Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 141183,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:25:58.653002983Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89/hostname",
	        "HostsPath": "/var/lib/docker/containers/8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89/hosts",
	        "LogPath": "/var/lib/docker/containers/8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89/8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89-json.log",
	        "Name": "/addons-808548",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-808548:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-808548",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8ba8a9320a550dd2b7e9e954e71dbc0d658b9e57c703b5e23b5a101a8b6ecf89",
	                "LowerDir": "/var/lib/docker/overlay2/0bbf6542911523bcf60aa175ebdc26146bf7f2dd177486aca0eb2c801bf3f352-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0bbf6542911523bcf60aa175ebdc26146bf7f2dd177486aca0eb2c801bf3f352/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0bbf6542911523bcf60aa175ebdc26146bf7f2dd177486aca0eb2c801bf3f352/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0bbf6542911523bcf60aa175ebdc26146bf7f2dd177486aca0eb2c801bf3f352/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-808548",
	                "Source": "/var/lib/docker/volumes/addons-808548/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-808548",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-808548",
	                "name.minikube.sigs.k8s.io": "addons-808548",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4d88c341d13426cf6f42955cabbd4732e0f1d8e9c3b1f9f3690ab228f8efa3a5",
	            "SandboxKey": "/var/run/docker/netns/4d88c341d134",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-808548": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:cd:7d:bf:e8:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cd6e83943d7923cce77d4b5c86646887375a6d303d2552d2f1e760e4a6261218",
	                    "EndpointID": "02f6ce3169aa5061bf53b42b51b81b8c960732d144e806904f533987c937f989",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-808548",
	                        "8ba8a9320a55"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
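The NetworkSettings.Ports map in the inspect output above is where the SSH endpoint seen throughout the stderr logs (127.0.0.1:32889) comes from: minikube's cli_runner pulls the host port bound to 22/tcp with the Go template visible in those logs. Run by hand against this profile, the same query prints 32889:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-808548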
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-808548 -n addons-808548
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-808548 logs -n 25: (1.201985518s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ start │ -o=json --download-only -p download-only-219122 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker --container-runtime=crio │ download-only-219122 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:25 UTC │
	│ delete │ -p download-only-219122 │ download-only-219122 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:25 UTC │
	│ start │ -o=json --download-only -p download-only-893455 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker --container-runtime=crio │ download-only-893455 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:25 UTC │
	│ delete │ -p download-only-893455 │ download-only-893455 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:25 UTC │
	│ delete │ -p download-only-219122 │ download-only-219122 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:25 UTC │
	│ delete │ -p download-only-893455 │ download-only-893455 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:25 UTC │
	│ start │ --download-only -p download-docker-414872 --alsologtostderr --driver=docker --container-runtime=crio │ download-docker-414872 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ │
	│ delete │ -p download-docker-414872 │ download-docker-414872 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:25 UTC │
	│ start │ --download-only -p binary-mirror-524976 --alsologtostderr --binary-mirror http://127.0.0.1:38925 --driver=docker --container-runtime=crio │ binary-mirror-524976 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ │
	│ delete │ -p binary-mirror-524976 │ binary-mirror-524976 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:25 UTC │
	│ addons │ disable dashboard -p addons-808548 │ addons-808548 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ │
	│ addons │ enable dashboard -p addons-808548 │ addons-808548 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ │
	│ start │ -p addons-808548 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-808548 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:28 UTC │
	│ addons │ addons-808548 addons disable volcano --alsologtostderr -v=1 │ addons-808548 │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │ │
	│ addons │ addons-808548 addons disable gcp-auth --alsologtostderr -v=1 │ addons-808548 │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │ │
	│ addons │ enable headlamp -p addons-808548 --alsologtostderr -v=1 │ addons-808548 │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │ │
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:25:33
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:25:33.702059  140531 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:25:33.702302  140531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:25:33.702310  140531 out.go:374] Setting ErrFile to fd 2...
	I1017 19:25:33.702314  140531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:25:33.702542  140531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:25:33.703140  140531 out.go:368] Setting JSON to false
	I1017 19:25:33.704031  140531 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4082,"bootTime":1760725052,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:25:33.704133  140531 start.go:141] virtualization: kvm guest
	I1017 19:25:33.706399  140531 out.go:179] * [addons-808548] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:25:33.708105  140531 notify.go:220] Checking for updates...
	I1017 19:25:33.708153  140531 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 19:25:33.709762  140531 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:25:33.711490  140531 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 19:25:33.713131  140531 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 19:25:33.714643  140531 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:25:33.716093  140531 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:25:33.717999  140531 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:25:33.742798  140531 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:25:33.742906  140531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:25:33.801327  140531 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-17 19:25:33.791638879 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:25:33.801470  140531 docker.go:318] overlay module found
	I1017 19:25:33.803563  140531 out.go:179] * Using the docker driver based on user configuration
	I1017 19:25:33.805146  140531 start.go:305] selected driver: docker
	I1017 19:25:33.805166  140531 start.go:925] validating driver "docker" against <nil>
	I1017 19:25:33.805180  140531 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:25:33.805821  140531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:25:33.867277  140531 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-17 19:25:33.857612227 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:25:33.867449  140531 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:25:33.867724  140531 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:25:33.869810  140531 out.go:179] * Using Docker driver with root privileges
	I1017 19:25:33.871462  140531 cni.go:84] Creating CNI manager for ""
	I1017 19:25:33.871529  140531 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:25:33.871540  140531 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 19:25:33.871614  140531 start.go:349] cluster config:
	{Name:addons-808548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-808548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:25:33.873223  140531 out.go:179] * Starting "addons-808548" primary control-plane node in "addons-808548" cluster
	I1017 19:25:33.874801  140531 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:25:33.876158  140531 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:25:33.877358  140531 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:25:33.877405  140531 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:25:33.877417  140531 cache.go:58] Caching tarball of preloaded images
	I1017 19:25:33.877463  140531 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:25:33.877510  140531 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:25:33.877522  140531 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:25:33.877870  140531 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/config.json ...
	I1017 19:25:33.877899  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/config.json: {Name:mkaca1513894a0aae948fe803cc8ba28d52d6cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:25:33.894234  140531 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 19:25:33.894361  140531 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1017 19:25:33.894382  140531 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1017 19:25:33.894390  140531 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1017 19:25:33.894399  140531 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1017 19:25:33.894404  140531 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1017 19:25:46.520011  140531 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1017 19:25:46.520057  140531 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:25:46.520091  140531 start.go:360] acquireMachinesLock for addons-808548: {Name:mk65579f0f6a86b497afc62e2daab2619360d7ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:25:46.520203  140531 start.go:364] duration metric: took 90.409µs to acquireMachinesLock for "addons-808548"
	I1017 19:25:46.520228  140531 start.go:93] Provisioning new machine with config: &{Name:addons-808548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-808548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:25:46.520317  140531 start.go:125] createHost starting for "" (driver="docker")
	I1017 19:25:46.522441  140531 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1017 19:25:46.522692  140531 start.go:159] libmachine.API.Create for "addons-808548" (driver="docker")
	I1017 19:25:46.522728  140531 client.go:168] LocalClient.Create starting
	I1017 19:25:46.522886  140531 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem
	I1017 19:25:46.629133  140531 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem
	I1017 19:25:46.958127  140531 cli_runner.go:164] Run: docker network inspect addons-808548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 19:25:46.975826  140531 cli_runner.go:211] docker network inspect addons-808548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 19:25:46.975916  140531 network_create.go:284] running [docker network inspect addons-808548] to gather additional debugging logs...
	I1017 19:25:46.975946  140531 cli_runner.go:164] Run: docker network inspect addons-808548
	W1017 19:25:46.993713  140531 cli_runner.go:211] docker network inspect addons-808548 returned with exit code 1
	I1017 19:25:46.993759  140531 network_create.go:287] error running [docker network inspect addons-808548]: docker network inspect addons-808548: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-808548 not found
	I1017 19:25:46.993777  140531 network_create.go:289] output of [docker network inspect addons-808548]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-808548 not found
	
	** /stderr **
	I1017 19:25:46.993905  140531 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:25:47.012503  140531 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00163ab20}
	I1017 19:25:47.012557  140531 network_create.go:124] attempt to create docker network addons-808548 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1017 19:25:47.012629  140531 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-808548 addons-808548
	I1017 19:25:47.072107  140531 network_create.go:108] docker network addons-808548 192.168.49.0/24 created
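
The dense --format template in the inspect calls above simply flattens the network's name, driver, IPAM subnet/gateway, MTU, and attached-container IPs into one JSON object. A trimmed variant (template simplified here; the output line is inferred from the subnet allocation logged just above, not captured from this run) would show the freshly created network:

	$ docker network inspect addons-808548 \
	    --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	addons-808548 192.168.49.0/24 192.168.49.1
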
	I1017 19:25:47.072144  140531 kic.go:121] calculated static IP "192.168.49.2" for the "addons-808548" container
	I1017 19:25:47.072224  140531 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 19:25:47.089189  140531 cli_runner.go:164] Run: docker volume create addons-808548 --label name.minikube.sigs.k8s.io=addons-808548 --label created_by.minikube.sigs.k8s.io=true
	I1017 19:25:47.108479  140531 oci.go:103] Successfully created a docker volume addons-808548
	I1017 19:25:47.108600  140531 cli_runner.go:164] Run: docker run --rm --name addons-808548-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-808548 --entrypoint /usr/bin/test -v addons-808548:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 19:25:54.075981  140531 cli_runner.go:217] Completed: docker run --rm --name addons-808548-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-808548 --entrypoint /usr/bin/test -v addons-808548:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.967337197s)
	I1017 19:25:54.076027  140531 oci.go:107] Successfully prepared a docker volume addons-808548
	I1017 19:25:54.076071  140531 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:25:54.076102  140531 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 19:25:54.076170  140531 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-808548:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 19:25:58.534137  140531 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-808548:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.457922089s)
	I1017 19:25:58.534168  140531 kic.go:203] duration metric: took 4.458063007s to extract preloaded images to volume ...
	W1017 19:25:58.534446  140531 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 19:25:58.534523  140531 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 19:25:58.534583  140531 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 19:25:58.592371  140531 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-808548 --name addons-808548 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-808548 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-808548 --network addons-808548 --ip 192.168.49.2 --volume addons-808548:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
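
For readability, the same docker run invocation re-wrapped one flag per line (arguments identical to the log line above; only the line breaks and comments are added):

	docker run -d -t \
	  --privileged --security-opt seccomp=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro \
	  --hostname addons-808548 --name addons-808548 \
	  --label created_by.minikube.sigs.k8s.io=true \
	  --label name.minikube.sigs.k8s.io=addons-808548 \
	  --label role.minikube.sigs.k8s.io= \
	  --label mode.minikube.sigs.k8s.io=addons-808548 \
	  # static IP on the bridge network created at 19:25:47
	  --network addons-808548 --ip 192.168.49.2 \
	  # the volume pre-seeded with the extracted preload tarball
	  --volume addons-808548:/var \
	  --security-opt apparmor=unconfined \
	  --memory=4096mb \
	  -e container=docker \
	  --expose 8443 \
	  # '::' lets Docker pick an ephemeral localhost port for each published container port
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	  --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
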
	I1017 19:25:58.890300  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Running}}
	I1017 19:25:58.909243  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:25:58.928180  140531 cli_runner.go:164] Run: docker exec addons-808548 stat /var/lib/dpkg/alternatives/iptables
	I1017 19:25:58.978313  140531 oci.go:144] the created container "addons-808548" has a running status.
	I1017 19:25:58.978351  140531 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa...
	I1017 19:25:59.133207  140531 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 19:25:59.159672  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:25:59.185144  140531 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 19:25:59.185173  140531 kic_runner.go:114] Args: [docker exec --privileged addons-808548 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 19:25:59.243295  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:25:59.264667  140531 machine.go:93] provisionDockerMachine start ...
	I1017 19:25:59.264799  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:25:59.287032  140531 main.go:141] libmachine: Using SSH client type: native
	I1017 19:25:59.287374  140531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32889 <nil> <nil>}
	I1017 19:25:59.287396  140531 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:25:59.426879  140531 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-808548
	
	I1017 19:25:59.426911  140531 ubuntu.go:182] provisioning hostname "addons-808548"
	I1017 19:25:59.426976  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:25:59.446144  140531 main.go:141] libmachine: Using SSH client type: native
	I1017 19:25:59.446413  140531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32889 <nil> <nil>}
	I1017 19:25:59.446436  140531 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-808548 && echo "addons-808548" | sudo tee /etc/hostname
	I1017 19:25:59.594579  140531 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-808548
	
	I1017 19:25:59.594667  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:25:59.614344  140531 main.go:141] libmachine: Using SSH client type: native
	I1017 19:25:59.614626  140531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32889 <nil> <nil>}
	I1017 19:25:59.614651  140531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-808548' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-808548/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-808548' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:25:59.750778  140531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:25:59.750819  140531 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 19:25:59.750874  140531 ubuntu.go:190] setting up certificates
	I1017 19:25:59.750890  140531 provision.go:84] configureAuth start
	I1017 19:25:59.750946  140531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-808548
	I1017 19:25:59.768583  140531 provision.go:143] copyHostCerts
	I1017 19:25:59.768665  140531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 19:25:59.768831  140531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 19:25:59.768907  140531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 19:25:59.768961  140531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.addons-808548 san=[127.0.0.1 192.168.49.2 addons-808548 localhost minikube]
	I1017 19:25:59.872056  140531 provision.go:177] copyRemoteCerts
	I1017 19:25:59.872117  140531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:25:59.872153  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:25:59.890140  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
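
Port 32889 in the SSH clients above is the ephemeral localhost port Docker bound to the container's 22/tcp via --publish=127.0.0.1::22 in the docker run earlier; it can be recovered at any time with (output shown as it would appear for this run):

	$ docker port addons-808548 22
	127.0.0.1:32889
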
	I1017 19:25:59.988467  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 19:26:00.009604  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:26:00.028169  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:26:00.047530  140531 provision.go:87] duration metric: took 296.620058ms to configureAuth
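
The server cert just copied to /etc/docker/server.pem carries the SANs from the san=[...] set logged at 19:25:59.768961. A standard check (a sketch, to be run on the node; not part of this run) would be:

	$ sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'

which should list addons-808548, localhost, minikube, 127.0.0.1, and 192.168.49.2.
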
	I1017 19:26:00.047571  140531 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:26:00.047756  140531 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:26:00.047857  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:00.066462  140531 main.go:141] libmachine: Using SSH client type: native
	I1017 19:26:00.066677  140531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32889 <nil> <nil>}
	I1017 19:26:00.066696  140531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:26:00.319719  140531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:26:00.319764  140531 machine.go:96] duration metric: took 1.055065357s to provisionDockerMachine
	I1017 19:26:00.319779  140531 client.go:171] duration metric: took 13.79704377s to LocalClient.Create
	I1017 19:26:00.319795  140531 start.go:167] duration metric: took 13.797105592s to libmachine.API.Create "addons-808548"
	I1017 19:26:00.319803  140531 start.go:293] postStartSetup for "addons-808548" (driver="docker")
	I1017 19:26:00.319812  140531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:26:00.319863  140531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:26:00.319911  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:00.338666  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:00.438674  140531 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:26:00.442440  140531 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:26:00.442473  140531 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:26:00.442489  140531 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 19:26:00.442562  140531 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 19:26:00.442598  140531 start.go:296] duration metric: took 122.788114ms for postStartSetup
	I1017 19:26:00.443053  140531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-808548
	I1017 19:26:00.461136  140531 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/config.json ...
	I1017 19:26:00.461420  140531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:26:00.461465  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:00.480436  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:00.575236  140531 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:26:00.580154  140531 start.go:128] duration metric: took 14.059814057s to createHost
	I1017 19:26:00.580182  140531 start.go:83] releasing machines lock for "addons-808548", held for 14.059967201s
	I1017 19:26:00.580262  140531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-808548
	I1017 19:26:00.598196  140531 ssh_runner.go:195] Run: cat /version.json
	I1017 19:26:00.598259  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:00.598315  140531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:26:00.598418  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:00.616979  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:00.617607  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:00.765637  140531 ssh_runner.go:195] Run: systemctl --version
	I1017 19:26:00.772338  140531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:26:00.811240  140531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:26:00.816296  140531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:26:00.816375  140531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:26:00.844652  140531 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
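
The runner logs commands with their shell quoting stripped, so the find invocation above will not paste back into a shell as-is. A sketch with the quoting restored (the escapes are reconstructed, not from the log):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;
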
	I1017 19:26:00.844676  140531 start.go:495] detecting cgroup driver to use...
	I1017 19:26:00.844707  140531 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 19:26:00.844786  140531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:26:00.860778  140531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:26:00.874044  140531 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:26:00.874109  140531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:26:00.891423  140531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:26:00.910090  140531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:26:00.990423  140531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:26:01.079181  140531 docker.go:234] disabling docker service ...
	I1017 19:26:01.079259  140531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:26:01.099718  140531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:26:01.113539  140531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:26:01.197576  140531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:26:01.282449  140531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:26:01.295997  140531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:26:01.311384  140531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:26:01.311448  140531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.323160  140531 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 19:26:01.323227  140531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.333122  140531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.342803  140531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.352540  140531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:26:01.361778  140531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.371558  140531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.386774  140531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:26:01.396473  140531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:26:01.404758  140531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
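
Assembled from the sed edits at 19:26:01.311-.386 above, the keys they touch in /etc/crio/crio.conf.d/02-crio.conf should end up roughly as follows (a sketch reconstructed from the commands, not a capture of the file):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
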
	I1017 19:26:01.412679  140531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:26:01.492206  140531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:26:01.598856  140531 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:26:01.598932  140531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:26:01.603314  140531 start.go:563] Will wait 60s for crictl version
	I1017 19:26:01.603381  140531 ssh_runner.go:195] Run: which crictl
	I1017 19:26:01.607469  140531 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:26:01.633262  140531 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:26:01.633370  140531 ssh_runner.go:195] Run: crio --version
	I1017 19:26:01.663013  140531 ssh_runner.go:195] Run: crio --version
	I1017 19:26:01.693534  140531 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:26:01.695074  140531 cli_runner.go:164] Run: docker network inspect addons-808548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:26:01.712397  140531 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:26:01.716843  140531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:26:01.728267  140531 kubeadm.go:883] updating cluster {Name:addons-808548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-808548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:26:01.728387  140531 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:26:01.728435  140531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:26:01.761040  140531 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:26:01.761063  140531 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:26:01.761113  140531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:26:01.787916  140531 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:26:01.787941  140531 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:26:01.787949  140531 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 19:26:01.788037  140531 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-808548 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-808548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:26:01.788103  140531 ssh_runner.go:195] Run: crio config
	I1017 19:26:01.835602  140531 cni.go:84] Creating CNI manager for ""
	I1017 19:26:01.835633  140531 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:26:01.835657  140531 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:26:01.835685  140531 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-808548 NodeName:addons-808548 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:26:01.835874  140531 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-808548"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:26:01.835953  140531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:26:01.844400  140531 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:26:01.844471  140531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:26:01.852769  140531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:26:01.865783  140531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:26:01.882589  140531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1017 19:26:01.895872  140531 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1017 19:26:01.899694  140531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:26:01.910422  140531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:26:01.989983  140531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:26:02.015299  140531 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548 for IP: 192.168.49.2
	I1017 19:26:02.015327  140531 certs.go:195] generating shared ca certs ...
	I1017 19:26:02.015354  140531 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.015520  140531 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 19:26:02.193219  140531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt ...
	I1017 19:26:02.193252  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt: {Name:mkfc088070143abbd0f930c07946609512d7ef36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.193436  140531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key ...
	I1017 19:26:02.193448  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key: {Name:mkaa7e58b0af7a6942d2615741dff1bed8e2be43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.193525  140531 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 19:26:02.409464  140531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt ...
	I1017 19:26:02.409499  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt: {Name:mk2e3f8e8d70d69eb6b5b9f14918e8b1168d78ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.409671  140531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key ...
	I1017 19:26:02.409687  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key: {Name:mkcb1da175e68492f6a06b0defa317fba200f634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.409791  140531 certs.go:257] generating profile certs ...
	I1017 19:26:02.409859  140531 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.key
	I1017 19:26:02.409875  140531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt with IP's: []
	I1017 19:26:02.619553  140531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt ...
	I1017 19:26:02.619587  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: {Name:mk665ca13c7fdca90358a51795e776aa2181e3ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.619770  140531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.key ...
	I1017 19:26:02.619782  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.key: {Name:mk92a5b10d31d3914366c137af2c424e55c73bfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.619860  140531 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.key.82446dd2
	I1017 19:26:02.619881  140531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.crt.82446dd2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1017 19:26:02.945779  140531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.crt.82446dd2 ...
	I1017 19:26:02.945820  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.crt.82446dd2: {Name:mk7cd72641b8baf28c795da2bb5867be4971f6d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.946006  140531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.key.82446dd2 ...
	I1017 19:26:02.946019  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.key.82446dd2: {Name:mkb6f19c6773ac83b8e937425fcc7f0a377d682c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:02.946101  140531 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.crt.82446dd2 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.crt
	I1017 19:26:02.946183  140531 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.key.82446dd2 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.key
	I1017 19:26:02.946259  140531 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.key
	I1017 19:26:02.946277  140531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.crt with IP's: []
	I1017 19:26:03.017201  140531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.crt ...
	I1017 19:26:03.017234  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.crt: {Name:mke571d81ce4e4b4899edc553a51d0cad4d1f265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:03.017398  140531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.key ...
	I1017 19:26:03.017411  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.key: {Name:mk53a17cf441dd5672ed895c266ecfd7051a21f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:03.017580  140531 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:26:03.017618  140531 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 19:26:03.017642  140531 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:26:03.017665  140531 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 19:26:03.018233  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:26:03.037338  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 19:26:03.056438  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:26:03.075692  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:26:03.094642  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 19:26:03.112993  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:26:03.131370  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:26:03.150550  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 19:26:03.169440  140531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:26:03.190973  140531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:26:03.205611  140531 ssh_runner.go:195] Run: openssl version
	I1017 19:26:03.212027  140531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:26:03.224093  140531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:26:03.228242  140531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:26:03.228355  140531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:26:03.263058  140531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
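
OpenSSL resolves trusted CAs under /etc/ssl/certs by subject hash, expecting a <hash>.0 entry; the x509 -hash call above computes that hash for minikubeCA.pem, and the symlink satisfies the lookup. Run on the node, the pair of checks would look like (the hash value is taken from the symlink name created above):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ readlink /etc/ssl/certs/b5213941.0
	/etc/ssl/certs/minikubeCA.pem
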
	I1017 19:26:03.272337  140531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:26:03.276253  140531 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 19:26:03.276304  140531 kubeadm.go:400] StartCluster: {Name:addons-808548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-808548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:26:03.276395  140531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:26:03.276452  140531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:26:03.304781  140531 cri.go:89] found id: ""
	I1017 19:26:03.304870  140531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:26:03.313357  140531 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 19:26:03.321754  140531 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 19:26:03.321846  140531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 19:26:03.329794  140531 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 19:26:03.329831  140531 kubeadm.go:157] found existing configuration files:
	
	I1017 19:26:03.329873  140531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 19:26:03.337716  140531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 19:26:03.337798  140531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 19:26:03.345677  140531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 19:26:03.353811  140531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 19:26:03.353896  140531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 19:26:03.361937  140531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 19:26:03.369889  140531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 19:26:03.369954  140531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 19:26:03.378097  140531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 19:26:03.386480  140531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 19:26:03.386545  140531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
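
The four grep-then-rm pairs above are minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed before kubeadm init runs. A minimal bash sketch of the same pattern, with paths as in the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected endpoint
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
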
	I1017 19:26:03.394561  140531 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 19:26:03.453845  140531 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 19:26:03.511106  140531 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 19:26:14.159330  140531 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 19:26:14.159395  140531 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 19:26:14.159507  140531 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 19:26:14.159597  140531 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 19:26:14.159651  140531 kubeadm.go:318] OS: Linux
	I1017 19:26:14.159709  140531 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 19:26:14.159807  140531 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 19:26:14.159881  140531 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 19:26:14.159965  140531 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 19:26:14.160026  140531 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 19:26:14.160101  140531 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 19:26:14.160153  140531 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 19:26:14.160196  140531 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 19:26:14.160305  140531 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 19:26:14.160395  140531 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 19:26:14.160478  140531 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 19:26:14.160584  140531 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 19:26:14.162556  140531 out.go:252]   - Generating certificates and keys ...
	I1017 19:26:14.162628  140531 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 19:26:14.162686  140531 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 19:26:14.162769  140531 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 19:26:14.162827  140531 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 19:26:14.162911  140531 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 19:26:14.162978  140531 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 19:26:14.163078  140531 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 19:26:14.163259  140531 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-808548 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 19:26:14.163338  140531 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 19:26:14.163508  140531 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-808548 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 19:26:14.163603  140531 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 19:26:14.163685  140531 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 19:26:14.163727  140531 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 19:26:14.163830  140531 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 19:26:14.163884  140531 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 19:26:14.163952  140531 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 19:26:14.164009  140531 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 19:26:14.164067  140531 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 19:26:14.164112  140531 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 19:26:14.164176  140531 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 19:26:14.164234  140531 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 19:26:14.165674  140531 out.go:252]   - Booting up control plane ...
	I1017 19:26:14.165786  140531 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:26:14.165853  140531 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:26:14.165908  140531 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:26:14.165993  140531 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:26:14.166095  140531 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 19:26:14.166184  140531 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 19:26:14.166259  140531 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:26:14.166302  140531 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:26:14.166421  140531 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 19:26:14.166522  140531 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 19:26:14.166597  140531 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001758969s
	I1017 19:26:14.166679  140531 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 19:26:14.166772  140531 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1017 19:26:14.166849  140531 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 19:26:14.166941  140531 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 19:26:14.167025  140531 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.307991069s
	I1017 19:26:14.167096  140531 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.770583697s
	I1017 19:26:14.167155  140531 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501807006s
	I1017 19:26:14.167264  140531 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 19:26:14.167402  140531 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 19:26:14.167467  140531 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 19:26:14.167640  140531 kubeadm.go:318] [mark-control-plane] Marking the node addons-808548 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 19:26:14.167754  140531 kubeadm.go:318] [bootstrap-token] Using token: me1c77.otz9569wj37o7b0e
	I1017 19:26:14.169390  140531 out.go:252]   - Configuring RBAC rules ...
	I1017 19:26:14.169514  140531 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 19:26:14.169629  140531 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 19:26:14.169885  140531 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 19:26:14.170005  140531 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 19:26:14.170096  140531 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 19:26:14.170164  140531 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 19:26:14.170272  140531 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 19:26:14.170329  140531 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 19:26:14.170392  140531 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 19:26:14.170402  140531 kubeadm.go:318] 
	I1017 19:26:14.170507  140531 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 19:26:14.170528  140531 kubeadm.go:318] 
	I1017 19:26:14.170630  140531 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 19:26:14.170642  140531 kubeadm.go:318] 
	I1017 19:26:14.170677  140531 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 19:26:14.170778  140531 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 19:26:14.170849  140531 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 19:26:14.170860  140531 kubeadm.go:318] 
	I1017 19:26:14.170932  140531 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 19:26:14.170948  140531 kubeadm.go:318] 
	I1017 19:26:14.171023  140531 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 19:26:14.171039  140531 kubeadm.go:318] 
	I1017 19:26:14.171107  140531 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 19:26:14.171174  140531 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 19:26:14.171234  140531 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 19:26:14.171241  140531 kubeadm.go:318] 
	I1017 19:26:14.171307  140531 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 19:26:14.171373  140531 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 19:26:14.171379  140531 kubeadm.go:318] 
	I1017 19:26:14.171452  140531 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token me1c77.otz9569wj37o7b0e \
	I1017 19:26:14.171569  140531 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 \
	I1017 19:26:14.171590  140531 kubeadm.go:318] 	--control-plane 
	I1017 19:26:14.171596  140531 kubeadm.go:318] 
	I1017 19:26:14.171677  140531 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 19:26:14.171692  140531 kubeadm.go:318] 
	I1017 19:26:14.171805  140531 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token me1c77.otz9569wj37o7b0e \
	I1017 19:26:14.171986  140531 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 
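
The --discovery-token-ca-cert-hash in both join commands is the SHA-256 of the cluster CA's public key. It can be recomputed from the certificateDir noted in the [certs] phase above and compared against the printed value (a sketch of the standard openssl pipeline from the kubeadm docs; assumes an RSA CA key):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
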
	I1017 19:26:14.172007  140531 cni.go:84] Creating CNI manager for ""
	I1017 19:26:14.172020  140531 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:26:14.174104  140531 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 19:26:14.175857  140531 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 19:26:14.180863  140531 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 19:26:14.180886  140531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 19:26:14.193843  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
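
With the docker driver plus crio runtime, minikube recommends kindnet and applies it through the cni.yaml shown above. Rollout could then be confirmed with kubectl (a sketch; the app=kindnet label is an assumption based on the kindnet manifest):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet -o wide
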
	I1017 19:26:14.410253  140531 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 19:26:14.410330  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:14.410361  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-808548 minikube.k8s.io/updated_at=2025_10_17T19_26_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=addons-808548 minikube.k8s.io/primary=true
	I1017 19:26:14.421885  140531 ops.go:34] apiserver oom_adj: -16
	I1017 19:26:14.495250  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:14.995943  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:15.495970  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:15.995397  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:16.495418  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:16.996296  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:17.495816  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:17.995420  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:18.496090  140531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:26:18.561677  140531 kubeadm.go:1113] duration metric: took 4.151412842s to wait for elevateKubeSystemPrivileges
	I1017 19:26:18.561719  140531 kubeadm.go:402] duration metric: took 15.285419539s to StartCluster
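
The repeated `kubectl get sa default` runs above are a poll: the post-init privilege elevation is treated as complete once the default service account exists. A rough bash equivalent of that wait, using the ~500ms spacing visible in the timestamps:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
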
	I1017 19:26:18.561930  140531 settings.go:142] acquiring lock: {Name:mka4633fb25e97d0a4c6d64012444d90b7517c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:18.562097  140531 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 19:26:18.562718  140531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:26:18.563013  140531 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:26:18.563197  140531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 19:26:18.563196  140531 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1017 19:26:18.563460  140531 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:26:18.563532  140531 addons.go:69] Setting ingress=true in profile "addons-808548"
	I1017 19:26:18.563544  140531 addons.go:69] Setting yakd=true in profile "addons-808548"
	I1017 19:26:18.563554  140531 addons.go:238] Setting addon ingress=true in "addons-808548"
	I1017 19:26:18.563531  140531 addons.go:69] Setting metrics-server=true in profile "addons-808548"
	I1017 19:26:18.563577  140531 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-808548"
	I1017 19:26:18.563584  140531 addons.go:238] Setting addon metrics-server=true in "addons-808548"
	I1017 19:26:18.563601  140531 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-808548"
	I1017 19:26:18.563619  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.563634  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.563801  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.563879  140531 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-808548"
	I1017 19:26:18.563904  140531 addons.go:69] Setting ingress-dns=true in profile "addons-808548"
	I1017 19:26:18.563923  140531 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-808548"
	I1017 19:26:18.563926  140531 addons.go:238] Setting addon ingress-dns=true in "addons-808548"
	I1017 19:26:18.563946  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.563955  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.564206  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.564256  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.564306  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.564389  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.564415  140531 addons.go:69] Setting default-storageclass=true in profile "addons-808548"
	I1017 19:26:18.564456  140531 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-808548"
	I1017 19:26:18.564565  140531 addons.go:69] Setting gcp-auth=true in profile "addons-808548"
	I1017 19:26:18.564791  140531 mustload.go:65] Loading cluster: addons-808548
	I1017 19:26:18.565116  140531 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:26:18.565181  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.565420  140531 addons.go:69] Setting cloud-spanner=true in profile "addons-808548"
	I1017 19:26:18.563568  140531 addons.go:238] Setting addon yakd=true in "addons-808548"
	I1017 19:26:18.565456  140531 addons.go:238] Setting addon cloud-spanner=true in "addons-808548"
	I1017 19:26:18.565521  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.565541  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.565879  140531 out.go:179] * Verifying Kubernetes components...
	I1017 19:26:18.566048  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.565515  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.566446  140531 addons.go:69] Setting storage-provisioner=true in profile "addons-808548"
	I1017 19:26:18.566476  140531 addons.go:238] Setting addon storage-provisioner=true in "addons-808548"
	I1017 19:26:18.566512  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.566619  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.567454  140531 addons.go:69] Setting registry=true in profile "addons-808548"
	I1017 19:26:18.567475  140531 addons.go:238] Setting addon registry=true in "addons-808548"
	I1017 19:26:18.567543  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.567690  140531 addons.go:69] Setting inspektor-gadget=true in profile "addons-808548"
	I1017 19:26:18.567707  140531 addons.go:238] Setting addon inspektor-gadget=true in "addons-808548"
	I1017 19:26:18.567732  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.568338  140531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:26:18.568792  140531 addons.go:69] Setting registry-creds=true in profile "addons-808548"
	I1017 19:26:18.569091  140531 addons.go:238] Setting addon registry-creds=true in "addons-808548"
	I1017 19:26:18.569173  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.569465  140531 addons.go:69] Setting volumesnapshots=true in profile "addons-808548"
	I1017 19:26:18.571682  140531 addons.go:238] Setting addon volumesnapshots=true in "addons-808548"
	I1017 19:26:18.571690  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.571726  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.569432  140531 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-808548"
	I1017 19:26:18.572069  140531 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-808548"
	I1017 19:26:18.572102  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.564766  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.572243  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.571035  140531 addons.go:69] Setting volcano=true in profile "addons-808548"
	I1017 19:26:18.571009  140531 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-808548"
	I1017 19:26:18.572555  140531 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-808548"
	I1017 19:26:18.572640  140531 addons.go:238] Setting addon volcano=true in "addons-808548"
	I1017 19:26:18.573867  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.573959  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.574114  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.576337  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.576384  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.577035  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.577155  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.651777  140531 addons.go:238] Setting addon default-storageclass=true in "addons-808548"
	I1017 19:26:18.653278  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.654489  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1017 19:26:18.654489  140531 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1017 19:26:18.655577  140531 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1017 19:26:18.656139  140531 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1017 19:26:18.656171  140531 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1017 19:26:18.656255  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.658428  140531 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1017 19:26:18.658505  140531 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1017 19:26:18.658597  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	W1017 19:26:18.659181  140531 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1017 19:26:18.659767  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.659931  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1017 19:26:18.661822  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1017 19:26:18.663313  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1017 19:26:18.683650  140531 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1017 19:26:18.684614  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.689776  140531 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 19:26:18.689880  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1017 19:26:18.690002  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.690230  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1017 19:26:18.690614  140531 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:26:18.690638  140531 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1017 19:26:18.690665  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1017 19:26:18.693577  140531 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1017 19:26:18.693612  140531 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1017 19:26:18.693683  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.697384  140531 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1017 19:26:18.697477  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1017 19:26:18.697564  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.697451  140531 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1017 19:26:18.700282  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1017 19:26:18.701073  140531 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 19:26:18.701097  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1017 19:26:18.701205  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.718515  140531 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1017 19:26:18.721291  140531 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:26:18.721360  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1017 19:26:18.721650  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.723425  140531 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:26:18.723639  140531 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1017 19:26:18.723716  140531 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1017 19:26:18.725134  140531 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:26:18.725160  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:26:18.725235  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.726040  140531 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 19:26:18.726058  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1017 19:26:18.726059  140531 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 19:26:18.726073  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1017 19:26:18.726115  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.726124  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.726407  140531 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 19:26:18.726420  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1017 19:26:18.726465  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.726694  140531 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1017 19:26:18.728993  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.731698  140531 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-808548"
	I1017 19:26:18.731868  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:18.731896  140531 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1017 19:26:18.731871  140531 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1017 19:26:18.732872  140531 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1017 19:26:18.732895  140531 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1017 19:26:18.732952  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.733951  140531 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:26:18.733973  140531 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:26:18.734034  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.735840  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:18.738822  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1017 19:26:18.738849  140531 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1017 19:26:18.738920  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.742480  140531 out.go:179]   - Using image docker.io/registry:3.0.0
	I1017 19:26:18.742706  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.743767  140531 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1017 19:26:18.743793  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1017 19:26:18.743846  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.747179  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.747493  140531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
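
The sed pipeline above edits the coredns ConfigMap in place: one expression inserts a hosts block ahead of the `forward . /etc/resolv.conf` line, the other inserts a `log` directive before `errors`. Reconstructed from the sed expressions, the injected Corefile fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
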
	I1017 19:26:18.781627  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.788611  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.796819  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.797132  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.799787  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.799990  140531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:26:18.801015  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.801643  140531 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1017 19:26:18.804590  140531 out.go:179]   - Using image docker.io/busybox:stable
	I1017 19:26:18.806021  140531 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 19:26:18.806088  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1017 19:26:18.806165  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:18.806553  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.810049  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	W1017 19:26:18.810555  140531 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 19:26:18.810600  140531 retry.go:31] will retry after 157.494373ms: ssh: handshake failed: EOF
	I1017 19:26:18.819482  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.826149  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:18.852511  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	W1017 19:26:18.853534  140531 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 19:26:18.853567  140531 retry.go:31] will retry after 130.786477ms: ssh: handshake failed: EOF
	I1017 19:26:18.948001  140531 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1017 19:26:18.948035  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1017 19:26:18.950989  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 19:26:18.955801  140531 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:18.955834  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1017 19:26:18.991288  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:26:19.008091  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:19.011320  140531 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1017 19:26:19.011352  140531 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1017 19:26:19.018006  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 19:26:19.021361  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1017 19:26:19.021394  140531 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1017 19:26:19.023582  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:26:19.024091  140531 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1017 19:26:19.024109  140531 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1017 19:26:19.029794  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 19:26:19.030132  140531 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1017 19:26:19.030153  140531 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1017 19:26:19.045335  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1017 19:26:19.062471  140531 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 19:26:19.062507  140531 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1017 19:26:19.064715  140531 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1017 19:26:19.064753  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1017 19:26:19.071176  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1017 19:26:19.074015  140531 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1017 19:26:19.081498  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 19:26:19.083804  140531 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1017 19:26:19.083847  140531 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1017 19:26:19.085532  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 19:26:19.107339  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1017 19:26:19.109139  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 19:26:19.114938  140531 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1017 19:26:19.116966  140531 node_ready.go:35] waiting up to 6m0s for node "addons-808548" to be "Ready" ...
	I1017 19:26:19.145301  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1017 19:26:19.145413  140531 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1017 19:26:19.150775  140531 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1017 19:26:19.150873  140531 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1017 19:26:19.177105  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1017 19:26:19.177226  140531 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1017 19:26:19.214116  140531 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1017 19:26:19.214222  140531 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1017 19:26:19.221410  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 19:26:19.223800  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1017 19:26:19.223827  140531 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1017 19:26:19.236560  140531 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1017 19:26:19.236591  140531 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1017 19:26:19.279500  140531 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1017 19:26:19.279529  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1017 19:26:19.280517  140531 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1017 19:26:19.280542  140531 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1017 19:26:19.298881  140531 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:26:19.298912  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1017 19:26:19.320207  140531 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1017 19:26:19.320261  140531 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1017 19:26:19.352188  140531 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1017 19:26:19.352338  140531 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1017 19:26:19.357325  140531 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1017 19:26:19.357355  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1017 19:26:19.385818  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:26:19.425916  140531 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1017 19:26:19.425945  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1017 19:26:19.426080  140531 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1017 19:26:19.426096  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1017 19:26:19.476869  140531 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 19:26:19.476895  140531 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1017 19:26:19.506583  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1017 19:26:19.529656  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 19:26:19.624448  140531 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-808548" context rescaled to 1 replicas
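
The rescale noted above pins CoreDNS to a single replica on this one-node cluster; a direct kubectl equivalent would be (sketch):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system scale deployment coredns --replicas=1
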
	W1017 19:26:19.963893  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:19.964012  140531 retry.go:31] will retry after 213.922627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
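
The `apiVersion not set, kind not set` validation error is consistent with the earlier transfer of ig-crd.yaml, which this log records as only 14 bytes, i.e. containing no complete CRD object. A client-side dry run would surface the same failure without touching the cluster (sketch):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
	  -f /etc/kubernetes/addons/ig-crd.yaml
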
	I1017 19:26:20.178217  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:20.271920  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.186345725s)
	I1017 19:26:20.271964  140531 addons.go:479] Verifying addon ingress=true in "addons-808548"
	I1017 19:26:20.272006  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.164616301s)
	I1017 19:26:20.272045  140531 addons.go:479] Verifying addon registry=true in "addons-808548"
	I1017 19:26:20.272071  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.162898391s)
	I1017 19:26:20.272357  140531 addons.go:479] Verifying addon metrics-server=true in "addons-808548"
	I1017 19:26:20.272101  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.050664714s)
	I1017 19:26:20.273925  140531 out.go:179] * Verifying registry addon...
	I1017 19:26:20.274069  140531 out.go:179] * Verifying ingress addon...
	I1017 19:26:20.277095  140531 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1017 19:26:20.277190  140531 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1017 19:26:20.281175  140531 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 19:26:20.281201  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:20.281813  140531 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1017 19:26:20.780433  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:20.780560  140531 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1017 19:26:20.780578  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:20.828641  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.442694894s)
	I1017 19:26:20.828720  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.322030376s)
	W1017 19:26:20.828788  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1017 19:26:20.828824  140531 retry.go:31] will retry after 213.515572ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
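
	[editor's note] The failure above is a CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is applied in the same kubectl invocation that creates the VolumeSnapshotClass CRD, so the REST mapping for snapshot.storage.k8s.io/v1 does not exist yet, hence "ensure CRDs are installed first". The retry succeeds once the API server has established the new type. One way to avoid the race, sketched with the apiextensions client (an illustration under that assumption, not what minikube does), is to wait for the CRD's Established condition before applying any custom resources:

```go
package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForCRDEstablished polls until the named CRD reports Established=True,
// i.e. the API server is ready to serve the new resource type.
func waitForCRDEstablished(ctx context.Context, cs apiextclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // CRD not visible yet: keep polling
			}
			for _, c := range crd.Status.Conditions {
				if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForCRDEstablished(context.Background(), cs, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
}
```

	Note the stdout/stderr dump appears twice per failure because addons.go logs it when deciding to retry and retry.go logs it again with the chosen delay.
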
	I1017 19:26:20.828952  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.299248686s)
	I1017 19:26:20.828981  140531 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-808548"
	I1017 19:26:20.830404  140531 out.go:179] * Verifying csi-hostpath-driver addon...
	I1017 19:26:20.830403  140531 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-808548 service yakd-dashboard -n yakd-dashboard
	
	I1017 19:26:20.833120  140531 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1017 19:26:20.838455  140531 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 19:26:20.838483  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:26:20.909677  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:20.909708  140531 retry.go:31] will retry after 188.633823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
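
	[editor's note] Unlike the CRD race above, this failure is not transient: kubectl's client-side validation rejects ig-crd.yaml because at least one YAML document in it lacks apiVersion or kind, and re-applying the identical file can never succeed, which is why every retry below fails with exactly the same message. A small pre-flight check that would surface the malformed document, sketched with gopkg.in/yaml.v3 (file path from the log; this checker is an illustration, not part of minikube):

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// A manifest file may hold several YAML documents separated by "---";
	// kubectl requires apiVersion and kind on every one of them.
	dec := yaml.NewDecoder(f)
	for i := 0; ; i++ {
		var doc map[string]any
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			panic(err)
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d: apiVersion or kind not set\n", i)
		}
	}
}
```
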
	I1017 19:26:21.043002  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:26:21.098951  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 19:26:21.120375  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:21.280865  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:21.280917  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:21.381726  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:21.781123  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:21.781347  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:21.836866  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:22.280818  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:22.281019  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:22.381909  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:22.780654  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:22.780895  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:22.836380  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:23.280546  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:23.280629  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:23.381178  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:23.550659  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.50760184s)
	I1017 19:26:23.550792  140531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.451801938s)
	W1017 19:26:23.550833  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:23.550861  140531 retry.go:31] will retry after 842.659034ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 19:26:23.620532  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:23.781093  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:23.781119  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:23.836832  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:24.281242  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:24.281304  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:24.382678  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:24.393799  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:24.780395  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:24.780526  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:24.836200  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:26:24.940639  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:24.940669  140531 retry.go:31] will retry after 1.108621186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
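
	[editor's note] The delays retry.go announces (213ms, 188ms, 842ms, 1.1s, and later 2.5s, 5.2s, 13.4s) grow roughly exponentially but not monotonically, which suggests jitter on top of a doubling base. A generic helper in that spirit, assuming exponential backoff with jitter (a sketch; minikube's real retry package may differ):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn up to attempts times, doubling a jittered
// delay between tries, and returns the last error if all attempts fail.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base << uint(i)                                // exponential growth
		delay := d/2 + time.Duration(rand.Int63n(int64(d))) // jitter in [d/2, 3d/2)
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("transient failure %d", calls)
		}
		return nil
	})
	fmt.Println("result:", err) // succeeds on the third call
}
```
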
	I1017 19:26:25.280790  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:25.280881  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:25.381553  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:25.780349  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:25.780430  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:25.836045  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:26.050282  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 19:26:26.120590  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:26.280842  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:26.280935  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:26.301268  140531 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1017 19:26:26.301355  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:26.321119  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:26:26.381596  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:26.432919  140531 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1017 19:26:26.446549  140531 addons.go:238] Setting addon gcp-auth=true in "addons-808548"
	I1017 19:26:26.446616  140531 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:26:26.447210  140531 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:26:26.468126  140531 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1017 19:26:26.468182  140531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:26:26.486572  140531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
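
	[editor's note] The cli_runner lines above ask Docker which host port it published for the node container's SSH endpoint; 22/tcp inside the container maps to 127.0.0.1:32889 on the host, as the sshutil line confirms. The same lookup from Go via os/exec, using the template from the log with its decorative single quotes dropped:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker for the host port bound to 22/tcp in the addons-808548 container.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"addons-808548").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 32889
}
```
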
	W1017 19:26:26.610657  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:26.610699  140531 retry.go:31] will retry after 649.773545ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:26.612871  140531 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:26:26.614437  140531 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1017 19:26:26.615922  140531 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1017 19:26:26.615946  140531 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1017 19:26:26.630619  140531 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1017 19:26:26.630643  140531 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1017 19:26:26.644951  140531 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 19:26:26.644979  140531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
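
	[editor's note] The "scp memory --> <path> (N bytes)" lines stream an in-memory payload straight to a file on the node over the SSH connection established above, with no temporary file on the host. A rough equivalent with golang.org/x/crypto/ssh, piping the bytes into sudo tee (a sketch assuming the connection details sshutil printed above; minikube's ssh_runner differs in detail, and the payload below is a stand-in):

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and port are the ones sshutil printed above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32889", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Stand-in payload for illustration; the real gcp-auth-ns.yaml content is not shown in the log.
	payload := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gcp-auth\n")
	session.Stdin = bytes.NewReader(payload)
	// Write the in-memory payload to the target path on the node.
	if err := session.Run("sudo tee /etc/kubernetes/addons/gcp-auth-ns.yaml >/dev/null"); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d bytes\n", len(payload))
}
```
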
	I1017 19:26:26.659313  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 19:26:26.780999  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:26.781194  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:26.836993  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:26.979531  140531 addons.go:479] Verifying addon gcp-auth=true in "addons-808548"
	I1017 19:26:26.981406  140531 out.go:179] * Verifying gcp-auth addon...
	I1017 19:26:26.985530  140531 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1017 19:26:26.988515  140531 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1017 19:26:26.988538  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:27.261285  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:27.280659  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:27.280867  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:27.336909  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:27.488807  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:27.781427  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:27.781644  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 19:26:27.815672  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:27.815717  140531 retry.go:31] will retry after 2.501516396s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:27.878215  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:27.989098  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:28.280641  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:28.280701  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:28.336226  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:28.489540  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:28.620399  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:28.780715  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:28.780862  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:28.836209  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:28.989168  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:29.280640  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:29.281031  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:29.336587  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:29.488394  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:29.780480  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:29.780557  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:29.836038  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:29.988832  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:30.280512  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:30.280759  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:30.317406  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:30.336687  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:30.489005  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:30.780566  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:30.780805  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:30.836336  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:26:30.871049  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:30.871088  140531 retry.go:31] will retry after 1.557214415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:30.988887  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:31.120575  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:31.280689  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:31.280719  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:31.336785  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:31.488927  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:31.780753  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:31.780801  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:31.836641  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:31.989272  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:32.280080  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:32.280129  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:32.337089  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:32.429225  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:32.489014  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:32.780570  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:32.780963  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:32.835995  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:26:32.975492  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:32.975536  140531 retry.go:31] will retry after 5.233525697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:32.989061  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:33.280446  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:33.280528  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:33.336063  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:33.489120  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:33.620933  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:33.779920  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:33.780179  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:33.836545  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:33.988763  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:34.280345  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:34.280385  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:34.336125  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:34.489290  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:34.780442  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:34.780599  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:34.836202  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:34.989202  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:35.280899  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:35.280956  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:35.336459  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:35.489273  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:35.783242  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:35.783391  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:35.835945  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:35.988758  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:36.120629  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:36.280392  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:36.280490  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:36.335908  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:36.490692  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:36.780189  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:36.780250  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:36.835801  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:36.988676  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:37.280385  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:37.280453  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:37.336136  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:37.489133  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:37.780682  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:37.780764  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:37.836474  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:37.988167  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:38.209894  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:38.280504  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:38.280590  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:38.336845  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:38.489077  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:38.619726  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	W1017 19:26:38.755654  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:38.755685  140531 retry.go:31] will retry after 4.412965899s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:38.780592  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:38.780661  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:38.836084  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:38.988782  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:39.280191  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:39.280332  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:39.335790  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:39.488853  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:39.780411  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:39.780492  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:39.836139  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:39.988895  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:40.280058  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:40.280178  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:40.336706  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:40.488807  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:40.620217  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
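
	[editor's note] The recurring node_ready.go:57 warnings track a single fact: the node's Ready condition is still False (the kubelet reports Ready only once its runtime and network are up), which is consistent with every addon pod above staying Pending until it flips. The underlying check, sketched with client-go (node name and kubeconfig path from the log):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node's Ready condition is True.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // no Ready condition reported yet
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(context.Background(), cs, "addons-808548")
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "addons-808548" Ready:`, ready)
}
```
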
	I1017 19:26:40.779760  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:40.779813  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:40.836313  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:40.988937  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:41.280243  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:41.280343  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:41.335732  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:41.488683  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:41.780567  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:41.780773  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:41.836334  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:41.989425  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:42.280327  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:42.280420  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:42.336378  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:42.489508  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:42.621386  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:42.780449  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:42.780656  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:42.836093  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:42.988993  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:43.169962  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:26:43.282163  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:43.282174  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:43.336562  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:43.489234  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:43.726377  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:43.726412  140531 retry.go:31] will retry after 13.373427082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:43.780157  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:43.780216  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:43.836846  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:43.988603  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:44.280450  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:44.280451  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:44.336139  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:44.489338  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:44.780513  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:44.780668  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:44.836100  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:44.989289  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:45.119759  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:45.280858  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:45.280899  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:45.336923  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:45.489057  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:45.780593  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:45.780828  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:45.836849  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:45.988677  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:46.280317  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:46.280463  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:46.336000  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:46.488953  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:46.780224  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:46.780335  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:46.835872  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:46.988602  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:47.120532  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:47.280374  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:47.280673  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:47.336124  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:47.488895  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:47.780268  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:47.780455  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:47.836204  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:47.988967  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:48.280467  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:48.280670  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:48.336237  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:48.489101  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:48.780015  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:48.780211  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:48.836547  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:48.988235  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:49.279859  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:49.279947  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:49.336469  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:49.489406  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:49.620021  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:49.780121  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:49.780120  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:49.836603  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:49.988286  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:50.280818  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:50.280930  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:50.336813  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:50.488551  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:50.780031  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:50.780188  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:50.836867  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:50.988687  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:51.280725  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:51.280906  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:51.336658  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:51.488192  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:51.780162  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:51.780162  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:51.837133  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:51.989059  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:52.120811  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:52.280502  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:52.280794  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:52.336713  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:52.488458  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:52.780480  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:52.780633  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:52.836438  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:52.989431  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:53.279865  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:53.280004  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:53.336808  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:53.488552  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:53.780407  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:53.780540  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:53.836323  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:53.989420  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:54.280203  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:54.280370  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:54.335759  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:54.488330  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:54.619974  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:54.781034  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:54.781153  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:54.836544  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:54.989342  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:55.279864  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:55.280037  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:55.336532  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:55.488519  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:55.780539  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:55.780583  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:55.836124  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:55.988804  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:56.280226  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:56.280382  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:56.336215  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:56.489316  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:56.780819  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:56.780884  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:56.836685  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:56.988239  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:57.100485  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 19:26:57.120657  140531 node_ready.go:57] node "addons-808548" has "Ready":"False" status (will retry)
	I1017 19:26:57.281495  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:57.281994  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:57.336583  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:57.488577  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:26:57.649711  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:57.649759  140531 retry.go:31] will retry after 18.492124279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:26:57.780772  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:57.780865  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:57.837066  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:57.990378  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:58.280453  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:58.280463  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:58.336345  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:58.489046  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:58.780096  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:58.780258  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:58.835880  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:58.988716  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:59.280356  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:59.280595  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:59.336164  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:59.489048  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:26:59.619292  140531 node_ready.go:49] node "addons-808548" is "Ready"
	I1017 19:26:59.619327  140531 node_ready.go:38] duration metric: took 40.502324802s for node "addons-808548" to be "Ready" ...
	I1017 19:26:59.619345  140531 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:26:59.619412  140531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:59.638966  140531 api_server.go:72] duration metric: took 41.075907287s to wait for apiserver process to appear ...
	I1017 19:26:59.639000  140531 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:26:59.639027  140531 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 19:26:59.648609  140531 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 19:26:59.651252  140531 api_server.go:141] control plane version: v1.34.1
	I1017 19:26:59.651293  140531 api_server.go:131] duration metric: took 12.283788ms to wait for apiserver health ...
	I1017 19:26:59.651304  140531 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:26:59.656229  140531 system_pods.go:59] 18 kube-system pods found
	I1017 19:26:59.656275  140531 system_pods.go:61] "amd-gpu-device-plugin-s9xrd" [b9ac4437-8f9f-4841-8858-358c218c25d2] Pending
	I1017 19:26:59.656285  140531 system_pods.go:61] "coredns-66bc5c9577-q7x6k" [f02e0ef6-42d8-4b0a-89a9-10488d5307dc] Pending
	I1017 19:26:59.656291  140531 system_pods.go:61] "csi-hostpath-attacher-0" [261c7cef-97b8-4198-9dfc-1693023dbcef] Pending
	I1017 19:26:59.656297  140531 system_pods.go:61] "csi-hostpath-resizer-0" [07450e65-29e6-43a9-80e7-f120cfccdb8e] Pending
	I1017 19:26:59.656302  140531 system_pods.go:61] "csi-hostpathplugin-srnfw" [62107854-6ddd-4530-82c2-823bcdaca289] Pending
	I1017 19:26:59.656307  140531 system_pods.go:61] "etcd-addons-808548" [df715b91-c74e-47d9-a49a-1669ba943c1e] Running
	I1017 19:26:59.656313  140531 system_pods.go:61] "kindnet-lwg6r" [e578c681-a2ec-4dd1-ab3e-b7ee9ed0ab7f] Running
	I1017 19:26:59.656320  140531 system_pods.go:61] "kube-apiserver-addons-808548" [89f97c7f-8789-4788-9e8e-bc061735d572] Running
	I1017 19:26:59.656325  140531 system_pods.go:61] "kube-controller-manager-addons-808548" [4c9c180a-be95-45ca-afe4-7de80c8b224e] Running
	I1017 19:26:59.656330  140531 system_pods.go:61] "kube-ingress-dns-minikube" [7233f829-bd06-422c-9013-7f76f4faf35d] Pending
	I1017 19:26:59.656339  140531 system_pods.go:61] "kube-proxy-ck6l7" [50768f34-51f0-440e-8651-5f2711d813c3] Running
	I1017 19:26:59.656344  140531 system_pods.go:61] "kube-scheduler-addons-808548" [b9521dd4-c933-4e87-afa5-505f31f56de8] Running
	I1017 19:26:59.656349  140531 system_pods.go:61] "metrics-server-85b7d694d7-q44mn" [8400f12c-e748-4220-a5b1-bd66d3cb4158] Pending
	I1017 19:26:59.656358  140531 system_pods.go:61] "registry-6b586f9694-ns7g9" [eacf9d9f-262f-4bd2-b0a0-f13212de3b0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:26:59.656365  140531 system_pods.go:61] "registry-creds-764b6fb674-d7p4h" [325a60a7-5f62-4ab1-9199-ac88319f2912] Pending
	I1017 19:26:59.656372  140531 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q75kr" [ad278525-ed53-4721-8b84-7f0e01657dd5] Pending
	I1017 19:26:59.656376  140531 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qpq25" [c0ce171d-6556-40a1-bc02-fd2db1cb57e3] Pending
	I1017 19:26:59.656381  140531 system_pods.go:61] "storage-provisioner" [00412528-a403-437c-8a95-82e04747a24b] Pending
	I1017 19:26:59.656388  140531 system_pods.go:74] duration metric: took 5.077305ms to wait for pod list to return data ...
	I1017 19:26:59.656398  140531 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:26:59.660505  140531 default_sa.go:45] found service account: "default"
	I1017 19:26:59.660540  140531 default_sa.go:55] duration metric: took 4.134125ms for default service account to be created ...
	I1017 19:26:59.660555  140531 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:26:59.663896  140531 system_pods.go:86] 19 kube-system pods found
	I1017 19:26:59.663931  140531 system_pods.go:89] "amd-gpu-device-plugin-s9xrd" [b9ac4437-8f9f-4841-8858-358c218c25d2] Pending
	I1017 19:26:59.663939  140531 system_pods.go:89] "coredns-66bc5c9577-q7x6k" [f02e0ef6-42d8-4b0a-89a9-10488d5307dc] Pending
	I1017 19:26:59.663945  140531 system_pods.go:89] "csi-hostpath-attacher-0" [261c7cef-97b8-4198-9dfc-1693023dbcef] Pending
	I1017 19:26:59.663950  140531 system_pods.go:89] "csi-hostpath-resizer-0" [07450e65-29e6-43a9-80e7-f120cfccdb8e] Pending
	I1017 19:26:59.663954  140531 system_pods.go:89] "csi-hostpathplugin-srnfw" [62107854-6ddd-4530-82c2-823bcdaca289] Pending
	I1017 19:26:59.663959  140531 system_pods.go:89] "etcd-addons-808548" [df715b91-c74e-47d9-a49a-1669ba943c1e] Running
	I1017 19:26:59.663965  140531 system_pods.go:89] "kindnet-lwg6r" [e578c681-a2ec-4dd1-ab3e-b7ee9ed0ab7f] Running
	I1017 19:26:59.663970  140531 system_pods.go:89] "kube-apiserver-addons-808548" [89f97c7f-8789-4788-9e8e-bc061735d572] Running
	I1017 19:26:59.663976  140531 system_pods.go:89] "kube-controller-manager-addons-808548" [4c9c180a-be95-45ca-afe4-7de80c8b224e] Running
	I1017 19:26:59.663989  140531 system_pods.go:89] "kube-ingress-dns-minikube" [7233f829-bd06-422c-9013-7f76f4faf35d] Pending
	I1017 19:26:59.663994  140531 system_pods.go:89] "kube-proxy-ck6l7" [50768f34-51f0-440e-8651-5f2711d813c3] Running
	I1017 19:26:59.663999  140531 system_pods.go:89] "kube-scheduler-addons-808548" [b9521dd4-c933-4e87-afa5-505f31f56de8] Running
	I1017 19:26:59.664004  140531 system_pods.go:89] "metrics-server-85b7d694d7-q44mn" [8400f12c-e748-4220-a5b1-bd66d3cb4158] Pending
	I1017 19:26:59.664008  140531 system_pods.go:89] "nvidia-device-plugin-daemonset-qh9hh" [5874d0fa-f0c2-4888-8ea5-7dda59b9164e] Pending
	I1017 19:26:59.664018  140531 system_pods.go:89] "registry-6b586f9694-ns7g9" [eacf9d9f-262f-4bd2-b0a0-f13212de3b0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:26:59.664106  140531 system_pods.go:89] "registry-creds-764b6fb674-d7p4h" [325a60a7-5f62-4ab1-9199-ac88319f2912] Pending
	I1017 19:26:59.664126  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q75kr" [ad278525-ed53-4721-8b84-7f0e01657dd5] Pending
	I1017 19:26:59.664132  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qpq25" [c0ce171d-6556-40a1-bc02-fd2db1cb57e3] Pending
	I1017 19:26:59.664138  140531 system_pods.go:89] "storage-provisioner" [00412528-a403-437c-8a95-82e04747a24b] Pending
	I1017 19:26:59.664159  140531 retry.go:31] will retry after 206.226157ms: missing components: kube-dns
	I1017 19:26:59.779945  140531 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 19:26:59.779968  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:26:59.779963  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:26:59.836096  140531 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 19:26:59.836126  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:26:59.878734  140531 system_pods.go:86] 20 kube-system pods found
	I1017 19:26:59.878790  140531 system_pods.go:89] "amd-gpu-device-plugin-s9xrd" [b9ac4437-8f9f-4841-8858-358c218c25d2] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 19:26:59.878801  140531 system_pods.go:89] "coredns-66bc5c9577-q7x6k" [f02e0ef6-42d8-4b0a-89a9-10488d5307dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:26:59.878809  140531 system_pods.go:89] "csi-hostpath-attacher-0" [261c7cef-97b8-4198-9dfc-1693023dbcef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:26:59.878815  140531 system_pods.go:89] "csi-hostpath-resizer-0" [07450e65-29e6-43a9-80e7-f120cfccdb8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:26:59.878820  140531 system_pods.go:89] "csi-hostpathplugin-srnfw" [62107854-6ddd-4530-82c2-823bcdaca289] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 19:26:59.878825  140531 system_pods.go:89] "etcd-addons-808548" [df715b91-c74e-47d9-a49a-1669ba943c1e] Running
	I1017 19:26:59.878830  140531 system_pods.go:89] "kindnet-lwg6r" [e578c681-a2ec-4dd1-ab3e-b7ee9ed0ab7f] Running
	I1017 19:26:59.878836  140531 system_pods.go:89] "kube-apiserver-addons-808548" [89f97c7f-8789-4788-9e8e-bc061735d572] Running
	I1017 19:26:59.878840  140531 system_pods.go:89] "kube-controller-manager-addons-808548" [4c9c180a-be95-45ca-afe4-7de80c8b224e] Running
	I1017 19:26:59.878850  140531 system_pods.go:89] "kube-ingress-dns-minikube" [7233f829-bd06-422c-9013-7f76f4faf35d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:26:59.878859  140531 system_pods.go:89] "kube-proxy-ck6l7" [50768f34-51f0-440e-8651-5f2711d813c3] Running
	I1017 19:26:59.878863  140531 system_pods.go:89] "kube-scheduler-addons-808548" [b9521dd4-c933-4e87-afa5-505f31f56de8] Running
	I1017 19:26:59.878868  140531 system_pods.go:89] "metrics-server-85b7d694d7-q44mn" [8400f12c-e748-4220-a5b1-bd66d3cb4158] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:26:59.878877  140531 system_pods.go:89] "nvidia-device-plugin-daemonset-qh9hh" [5874d0fa-f0c2-4888-8ea5-7dda59b9164e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:26:59.878885  140531 system_pods.go:89] "registry-6b586f9694-ns7g9" [eacf9d9f-262f-4bd2-b0a0-f13212de3b0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:26:59.878890  140531 system_pods.go:89] "registry-creds-764b6fb674-d7p4h" [325a60a7-5f62-4ab1-9199-ac88319f2912] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:26:59.878897  140531 system_pods.go:89] "registry-proxy-5gbvf" [0f8d0ee8-125b-4765-824e-19053a0dcfe6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:26:59.878911  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q75kr" [ad278525-ed53-4721-8b84-7f0e01657dd5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:26:59.878924  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qpq25" [c0ce171d-6556-40a1-bc02-fd2db1cb57e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:26:59.878936  140531 system_pods.go:89] "storage-provisioner" [00412528-a403-437c-8a95-82e04747a24b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:26:59.878956  140531 retry.go:31] will retry after 264.802509ms: missing components: kube-dns
	I1017 19:26:59.989766  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:00.149947  140531 system_pods.go:86] 20 kube-system pods found
	I1017 19:27:00.149986  140531 system_pods.go:89] "amd-gpu-device-plugin-s9xrd" [b9ac4437-8f9f-4841-8858-358c218c25d2] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 19:27:00.149998  140531 system_pods.go:89] "coredns-66bc5c9577-q7x6k" [f02e0ef6-42d8-4b0a-89a9-10488d5307dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:27:00.150007  140531 system_pods.go:89] "csi-hostpath-attacher-0" [261c7cef-97b8-4198-9dfc-1693023dbcef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:27:00.150015  140531 system_pods.go:89] "csi-hostpath-resizer-0" [07450e65-29e6-43a9-80e7-f120cfccdb8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:27:00.150023  140531 system_pods.go:89] "csi-hostpathplugin-srnfw" [62107854-6ddd-4530-82c2-823bcdaca289] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 19:27:00.150030  140531 system_pods.go:89] "etcd-addons-808548" [df715b91-c74e-47d9-a49a-1669ba943c1e] Running
	I1017 19:27:00.150037  140531 system_pods.go:89] "kindnet-lwg6r" [e578c681-a2ec-4dd1-ab3e-b7ee9ed0ab7f] Running
	I1017 19:27:00.150048  140531 system_pods.go:89] "kube-apiserver-addons-808548" [89f97c7f-8789-4788-9e8e-bc061735d572] Running
	I1017 19:27:00.150053  140531 system_pods.go:89] "kube-controller-manager-addons-808548" [4c9c180a-be95-45ca-afe4-7de80c8b224e] Running
	I1017 19:27:00.150060  140531 system_pods.go:89] "kube-ingress-dns-minikube" [7233f829-bd06-422c-9013-7f76f4faf35d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:27:00.150065  140531 system_pods.go:89] "kube-proxy-ck6l7" [50768f34-51f0-440e-8651-5f2711d813c3] Running
	I1017 19:27:00.150071  140531 system_pods.go:89] "kube-scheduler-addons-808548" [b9521dd4-c933-4e87-afa5-505f31f56de8] Running
	I1017 19:27:00.150079  140531 system_pods.go:89] "metrics-server-85b7d694d7-q44mn" [8400f12c-e748-4220-a5b1-bd66d3cb4158] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:27:00.150092  140531 system_pods.go:89] "nvidia-device-plugin-daemonset-qh9hh" [5874d0fa-f0c2-4888-8ea5-7dda59b9164e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:27:00.150100  140531 system_pods.go:89] "registry-6b586f9694-ns7g9" [eacf9d9f-262f-4bd2-b0a0-f13212de3b0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:27:00.150108  140531 system_pods.go:89] "registry-creds-764b6fb674-d7p4h" [325a60a7-5f62-4ab1-9199-ac88319f2912] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:27:00.150116  140531 system_pods.go:89] "registry-proxy-5gbvf" [0f8d0ee8-125b-4765-824e-19053a0dcfe6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:27:00.150132  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q75kr" [ad278525-ed53-4721-8b84-7f0e01657dd5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:27:00.150147  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qpq25" [c0ce171d-6556-40a1-bc02-fd2db1cb57e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:27:00.150160  140531 system_pods.go:89] "storage-provisioner" [00412528-a403-437c-8a95-82e04747a24b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:27:00.150186  140531 retry.go:31] will retry after 402.374722ms: missing components: kube-dns
	I1017 19:27:00.280978  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:00.281065  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:00.336895  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:00.488983  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:00.561719  140531 system_pods.go:86] 20 kube-system pods found
	I1017 19:27:00.561779  140531 system_pods.go:89] "amd-gpu-device-plugin-s9xrd" [b9ac4437-8f9f-4841-8858-358c218c25d2] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 19:27:00.561789  140531 system_pods.go:89] "coredns-66bc5c9577-q7x6k" [f02e0ef6-42d8-4b0a-89a9-10488d5307dc] Running
	I1017 19:27:00.561800  140531 system_pods.go:89] "csi-hostpath-attacher-0" [261c7cef-97b8-4198-9dfc-1693023dbcef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:27:00.561810  140531 system_pods.go:89] "csi-hostpath-resizer-0" [07450e65-29e6-43a9-80e7-f120cfccdb8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:27:00.561823  140531 system_pods.go:89] "csi-hostpathplugin-srnfw" [62107854-6ddd-4530-82c2-823bcdaca289] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 19:27:00.561830  140531 system_pods.go:89] "etcd-addons-808548" [df715b91-c74e-47d9-a49a-1669ba943c1e] Running
	I1017 19:27:00.561836  140531 system_pods.go:89] "kindnet-lwg6r" [e578c681-a2ec-4dd1-ab3e-b7ee9ed0ab7f] Running
	I1017 19:27:00.561843  140531 system_pods.go:89] "kube-apiserver-addons-808548" [89f97c7f-8789-4788-9e8e-bc061735d572] Running
	I1017 19:27:00.561849  140531 system_pods.go:89] "kube-controller-manager-addons-808548" [4c9c180a-be95-45ca-afe4-7de80c8b224e] Running
	I1017 19:27:00.561857  140531 system_pods.go:89] "kube-ingress-dns-minikube" [7233f829-bd06-422c-9013-7f76f4faf35d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:27:00.561862  140531 system_pods.go:89] "kube-proxy-ck6l7" [50768f34-51f0-440e-8651-5f2711d813c3] Running
	I1017 19:27:00.561868  140531 system_pods.go:89] "kube-scheduler-addons-808548" [b9521dd4-c933-4e87-afa5-505f31f56de8] Running
	I1017 19:27:00.561878  140531 system_pods.go:89] "metrics-server-85b7d694d7-q44mn" [8400f12c-e748-4220-a5b1-bd66d3cb4158] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:27:00.561887  140531 system_pods.go:89] "nvidia-device-plugin-daemonset-qh9hh" [5874d0fa-f0c2-4888-8ea5-7dda59b9164e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:27:00.561901  140531 system_pods.go:89] "registry-6b586f9694-ns7g9" [eacf9d9f-262f-4bd2-b0a0-f13212de3b0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:27:00.561909  140531 system_pods.go:89] "registry-creds-764b6fb674-d7p4h" [325a60a7-5f62-4ab1-9199-ac88319f2912] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:27:00.561921  140531 system_pods.go:89] "registry-proxy-5gbvf" [0f8d0ee8-125b-4765-824e-19053a0dcfe6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:27:00.561931  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q75kr" [ad278525-ed53-4721-8b84-7f0e01657dd5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:27:00.561940  140531 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qpq25" [c0ce171d-6556-40a1-bc02-fd2db1cb57e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:27:00.561945  140531 system_pods.go:89] "storage-provisioner" [00412528-a403-437c-8a95-82e04747a24b] Running
	I1017 19:27:00.561961  140531 system_pods.go:126] duration metric: took 901.397753ms to wait for k8s-apps to be running ...
	I1017 19:27:00.561971  140531 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:27:00.562033  140531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:27:00.582658  140531 system_svc.go:56] duration metric: took 20.678305ms WaitForService to wait for kubelet
	I1017 19:27:00.582684  140531 kubeadm.go:586] duration metric: took 42.019634517s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:27:00.582705  140531 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:27:00.585722  140531 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 19:27:00.585765  140531 node_conditions.go:123] node cpu capacity is 8
	I1017 19:27:00.585785  140531 node_conditions.go:105] duration metric: took 3.075104ms to run NodePressure ...
	I1017 19:27:00.585800  140531 start.go:241] waiting for startup goroutines ...
	I1017 19:27:00.781389  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:00.781422  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:00.837033  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:00.988435  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:01.280356  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:01.280458  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:01.335963  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:01.488598  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:01.780881  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:01.780931  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:01.836889  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:01.988574  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:02.281344  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:02.281339  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:02.382430  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:02.490240  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:02.780500  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:02.780630  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:02.836889  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:02.988545  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:03.281176  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:03.281201  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:03.337874  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:03.489251  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:03.781302  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:03.781365  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:03.836674  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:03.989420  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:04.280916  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:04.281110  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:04.336706  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:04.489507  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:04.780555  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:04.780702  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:04.836556  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:04.989353  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:05.280431  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:05.280473  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:05.336634  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:05.490178  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:05.781129  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:05.781133  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:05.837528  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:05.989284  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:06.281113  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:06.281113  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:06.336992  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:06.488766  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:06.782141  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:06.782204  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:06.837094  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:06.988882  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:07.281577  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:07.281832  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:07.337404  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:07.489852  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:07.782366  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:07.782415  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:07.837400  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:07.989456  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:08.281180  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:08.281235  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:08.337506  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:08.489314  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:08.780881  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:08.780958  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:08.837215  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:08.989126  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:09.280298  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:09.280340  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:09.336719  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:09.488846  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:09.781497  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:09.781814  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:09.837290  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:09.989400  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:10.281165  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:10.281192  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:10.406382  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:10.489548  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:10.780693  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:10.780811  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:10.836779  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:10.988478  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:11.281172  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:11.281178  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:11.382238  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:11.489061  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:11.780842  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:11.780861  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:11.837605  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:11.989851  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:12.281822  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:12.281859  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:12.382658  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:12.490028  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:12.780298  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:12.780335  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:12.837155  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:12.988897  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:13.281067  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:13.281187  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:13.337329  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:13.489092  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:13.780894  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:13.781077  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:13.837570  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:13.988633  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:14.281695  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:14.281770  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:14.337097  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:14.488917  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:14.781769  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:14.781895  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:14.836895  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:14.989023  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:15.281347  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:15.281483  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:15.336337  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:15.489039  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:15.781416  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:15.781495  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:15.836356  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:15.989377  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:16.142691  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:27:16.280716  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:16.280780  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:16.336575  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:16.489336  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:16.781649  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:16.782770  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 19:27:16.806263  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:27:16.806303  140531 retry.go:31] will retry after 18.104254162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
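
Every apply attempt above fails the same way: client-side validation rejects ig-crd.yaml because at least one YAML document in the file is missing the mandatory apiVersion and kind fields. A minimal sketch of the header such a manifest needs, plus the workaround the error message itself offers (apiextensions.k8s.io/v1 is the standard group/version for CRDs, assumed here rather than read from the actual file):

    # Every document in ig-crd.yaml must open with both fields, e.g.:
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition
    # As a last resort, skip client-side validation entirely:
    kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml
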
	I1017 19:27:16.837721  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:16.989574  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:17.281559  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:17.283339  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:17.337772  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:17.489321  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:17.782349  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:17.783374  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:17.838067  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:17.989013  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:18.283374  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:18.283578  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:18.431981  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:18.490396  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:18.785642  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:18.786808  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:18.884270  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:18.989162  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:19.281456  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:19.281499  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:19.336692  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:19.488545  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:19.780765  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:19.780905  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:19.837428  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:19.989665  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:20.281092  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:20.281268  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:20.337421  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:20.489394  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:20.781067  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:20.781313  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:20.837278  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:20.989267  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:21.280631  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:21.280730  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:21.337191  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:21.489152  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:21.781449  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:21.781656  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:21.836566  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:21.989315  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:22.280869  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:22.280898  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:22.382666  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:22.489435  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:22.781207  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:22.781258  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:22.837364  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:22.989184  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:23.280255  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:23.280279  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:23.337578  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:23.489680  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:23.781176  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:23.781181  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:23.837607  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:23.988683  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:24.307313  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:24.308083  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:24.488082  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:24.488978  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:24.819960  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:24.820160  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:24.837009  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:24.989057  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:25.280551  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:25.280616  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:25.336957  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:25.488963  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:25.781388  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:25.781566  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:25.882230  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:25.988478  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:26.280921  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:26.281214  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:26.339162  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:26.488936  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:26.781294  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:26.781550  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:26.836502  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:26.989319  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:27.280639  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:27.280730  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:27.336424  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:27.489217  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:27.781331  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:27.781957  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:27.837533  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:27.989521  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:28.280997  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:28.281545  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:28.337473  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:28.488960  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:28.781169  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:28.781320  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:28.837612  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:28.989613  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:29.281203  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:29.281222  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:29.337604  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:29.489569  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:29.780855  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:29.781125  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:29.882154  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:29.989177  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:30.280690  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:30.280864  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:30.382036  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:30.488156  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:30.780021  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:30.780227  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:30.836986  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:30.989013  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:31.280897  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:31.281002  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:31.336803  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:31.489629  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:31.781334  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:31.781415  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:31.882536  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:31.989414  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:32.280652  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:32.280923  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:32.337636  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:32.490497  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:32.781203  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:32.781239  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:32.837117  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:32.993243  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:33.280826  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:33.280843  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:33.337080  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:33.489023  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:33.780038  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:33.780136  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:33.837196  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:33.988825  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:34.280880  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:34.280902  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:34.337447  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:34.489595  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:34.781642  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:34.781811  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:34.837203  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:34.911299  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:27:34.989319  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:35.280715  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:27:35.280729  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:35.337816  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:35.488749  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:27:35.501343  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:27:35.501378  140531 retry.go:31] will retry after 26.661352304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:27:35.783508  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:35.786943  140531 kapi.go:107] duration metric: took 1m15.509748146s to wait for kubernetes.io/minikube-addons=registry ...
	I1017 19:27:35.838240  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:35.991678  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:36.281359  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:36.336894  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:36.490509  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:36.781438  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:36.836507  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:36.989418  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:37.281241  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:37.337408  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:37.488672  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:37.780726  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:37.836988  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:37.988877  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:38.281002  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:38.336928  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:38.489039  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:38.780706  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:38.837398  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:38.989557  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:39.280637  140531 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:27:39.381883  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:39.488470  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:39.781335  140531 kapi.go:107] duration metric: took 1m19.504237672s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1017 19:27:39.836373  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:39.989616  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:40.337221  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:40.489874  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:40.837198  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:40.989447  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:41.337215  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:41.489307  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:41.947283  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:41.988398  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:42.337272  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:42.489703  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:42.837197  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:42.988649  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:43.337276  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:43.488960  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:43.836191  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:43.988808  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:44.337443  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:44.494479  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:44.837496  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:44.989462  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:45.337053  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:45.488778  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:27:45.837681  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:45.989461  140531 kapi.go:107] duration metric: took 1m19.003928203s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1017 19:27:45.992155  140531 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-808548 cluster.
	I1017 19:27:45.994005  140531 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1017 19:27:45.995595  140531 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
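
The three notes above describe the gcp-auth webhook's opt-out: pods created with the gcp-auth-skip-secret label do not get credentials mounted. A minimal sketch of such a pod, assuming the conventional "true" value (the log confirms only the label key; the pod name and image are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-creds-example
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: app
        image: busybox:1.28
        command: ["sleep", "3600"]
    EOF
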
	I1017 19:27:46.336977  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:46.836728  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:47.338984  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:47.836897  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:48.337520  140531 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:27:48.836613  140531 kapi.go:107] duration metric: took 1m28.003487052s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1017 19:28:02.166341  140531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 19:28:02.725320  140531 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 19:28:02.725447  140531 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1017 19:28:02.727360  140531 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, registry-creds, amd-gpu-device-plugin, default-storageclass, metrics-server, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1017 19:28:02.728897  140531 addons.go:514] duration metric: took 1m44.165697583s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner registry-creds amd-gpu-device-plugin default-storageclass metrics-server storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
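
The final state of this run can be checked against the summary above by listing addon states for the profile; a sketch using the profile name from the log:

    minikube addons list -p addons-808548
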
	I1017 19:28:02.728993  140531 start.go:246] waiting for cluster config update ...
	I1017 19:28:02.729019  140531 start.go:255] writing updated cluster config ...
	I1017 19:28:02.729333  140531 ssh_runner.go:195] Run: rm -f paused
	I1017 19:28:02.734175  140531 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:28:02.738215  140531 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q7x6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:02.742650  140531 pod_ready.go:94] pod "coredns-66bc5c9577-q7x6k" is "Ready"
	I1017 19:28:02.742677  140531 pod_ready.go:86] duration metric: took 4.437848ms for pod "coredns-66bc5c9577-q7x6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:02.745069  140531 pod_ready.go:83] waiting for pod "etcd-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:02.750076  140531 pod_ready.go:94] pod "etcd-addons-808548" is "Ready"
	I1017 19:28:02.750125  140531 pod_ready.go:86] duration metric: took 5.029215ms for pod "etcd-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:02.752734  140531 pod_ready.go:83] waiting for pod "kube-apiserver-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:02.757147  140531 pod_ready.go:94] pod "kube-apiserver-addons-808548" is "Ready"
	I1017 19:28:02.757175  140531 pod_ready.go:86] duration metric: took 4.397202ms for pod "kube-apiserver-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:02.759289  140531 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:03.137700  140531 pod_ready.go:94] pod "kube-controller-manager-addons-808548" is "Ready"
	I1017 19:28:03.137733  140531 pod_ready.go:86] duration metric: took 378.417325ms for pod "kube-controller-manager-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:03.338519  140531 pod_ready.go:83] waiting for pod "kube-proxy-ck6l7" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:03.737956  140531 pod_ready.go:94] pod "kube-proxy-ck6l7" is "Ready"
	I1017 19:28:03.737985  140531 pod_ready.go:86] duration metric: took 399.429394ms for pod "kube-proxy-ck6l7" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:03.938699  140531 pod_ready.go:83] waiting for pod "kube-scheduler-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:04.338129  140531 pod_ready.go:94] pod "kube-scheduler-addons-808548" is "Ready"
	I1017 19:28:04.338159  140531 pod_ready.go:86] duration metric: took 399.433782ms for pod "kube-scheduler-addons-808548" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:28:04.338174  140531 pod_ready.go:40] duration metric: took 1.603941826s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
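
The readiness sweep above polls each control-plane pod by label. The same check can be reproduced by hand with the labels quoted in the log (the 240s timeout mirrors the 4m0s budget above and is otherwise illustrative):

    kubectl get pods -n kube-system -l k8s-app=kube-dns
    kubectl wait --for=condition=Ready pod -l component=etcd -n kube-system --timeout=240s
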
	I1017 19:28:04.384487  140531 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 19:28:04.386821  140531 out.go:179] * Done! kubectl is now configured to use "addons-808548" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 17 19:28:05 addons-808548 crio[769]: time="2025-10-17T19:28:05.300658183Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 19:28:07 addons-808548 crio[769]: time="2025-10-17T19:28:07.259283808Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=dd3c9188-6940-443f-9460-1a63fb757a9e name=/runtime.v1.ImageService/PullImage
	Oct 17 19:28:07 addons-808548 crio[769]: time="2025-10-17T19:28:07.2599871Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9d16c170-260d-48fa-9c22-d7cd20cce419 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:28:07 addons-808548 crio[769]: time="2025-10-17T19:28:07.26142824Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dc6d7507-9c8b-42d8-ae43-b56cd370c5ac name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:28:07 addons-808548 crio[769]: time="2025-10-17T19:28:07.265766624Z" level=info msg="Creating container: default/busybox/busybox" id=a706afe0-6fdf-4543-abca-ff686efbcc17 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:28:07 addons-808548 crio[769]: time="2025-10-17T19:28:07.266452492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:28:07 addons-808548 crio[769]: time="2025-10-17T19:28:07.271807265Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:28:07 addons-808548 crio[769]: time="2025-10-17T19:28:07.272232634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:28:07 addons-808548 crio[769]: time="2025-10-17T19:28:07.305126537Z" level=info msg="Created container a8328a95cc8807e18148253f94167f85d078c59c9d37cd038ee08cd2bfa10798: default/busybox/busybox" id=a706afe0-6fdf-4543-abca-ff686efbcc17 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:28:07 addons-808548 crio[769]: time="2025-10-17T19:28:07.305899322Z" level=info msg="Starting container: a8328a95cc8807e18148253f94167f85d078c59c9d37cd038ee08cd2bfa10798" id=3bad5b99-8010-490c-ba7f-0c56eefd824c name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:28:07 addons-808548 crio[769]: time="2025-10-17T19:28:07.307939875Z" level=info msg="Started container" PID=6404 containerID=a8328a95cc8807e18148253f94167f85d078c59c9d37cd038ee08cd2bfa10798 description=default/busybox/busybox id=3bad5b99-8010-490c-ba7f-0c56eefd824c name=/runtime.v1.RuntimeService/StartContainer sandboxID=695751ac506909d3bdec06bdff1671eee559a0e5894b722ebcd7c438b9d509f2
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.38535362Z" level=info msg="Removing container: bdef9f94d6691b0a1c1bd0b753d3bf8a4348d41896e0f5ad42aab54d991176de" id=68cd9194-5611-4f6c-bba6-131d36d61319 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.393439164Z" level=info msg="Removed container bdef9f94d6691b0a1c1bd0b753d3bf8a4348d41896e0f5ad42aab54d991176de: gcp-auth/gcp-auth-certs-patch-pvd4s/patch" id=68cd9194-5611-4f6c-bba6-131d36d61319 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.395165237Z" level=info msg="Removing container: 1fb15864a08c56a0f1faba9c2255f59fafd779ab12b39a9841609e31325b295c" id=03b767f3-4eaa-4599-a48c-be21fec66c7d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.40276783Z" level=info msg="Removed container 1fb15864a08c56a0f1faba9c2255f59fafd779ab12b39a9841609e31325b295c: gcp-auth/gcp-auth-certs-create-wsgbc/create" id=03b767f3-4eaa-4599-a48c-be21fec66c7d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.405611445Z" level=info msg="Stopping pod sandbox: 46b79163c766c99d10de9188c46d8220b27daf0f2b86fdfa4607bd7d537dac70" id=41ccd14d-3752-466b-8168-79f7a8410868 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.405656598Z" level=info msg="Stopped pod sandbox (already stopped): 46b79163c766c99d10de9188c46d8220b27daf0f2b86fdfa4607bd7d537dac70" id=41ccd14d-3752-466b-8168-79f7a8410868 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.406082416Z" level=info msg="Removing pod sandbox: 46b79163c766c99d10de9188c46d8220b27daf0f2b86fdfa4607bd7d537dac70" id=ac74f8f2-2a3f-4af6-a603-87a5410f90a1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.409079815Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.409141471Z" level=info msg="Removed pod sandbox: 46b79163c766c99d10de9188c46d8220b27daf0f2b86fdfa4607bd7d537dac70" id=ac74f8f2-2a3f-4af6-a603-87a5410f90a1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.409634616Z" level=info msg="Stopping pod sandbox: 626948d515b9189214bc5c53723b2646101453a4f7a461eb08f346e3b1044887" id=37bdd8fc-5128-469e-8193-8da58e1c8073 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.409688328Z" level=info msg="Stopped pod sandbox (already stopped): 626948d515b9189214bc5c53723b2646101453a4f7a461eb08f346e3b1044887" id=37bdd8fc-5128-469e-8193-8da58e1c8073 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.410163008Z" level=info msg="Removing pod sandbox: 626948d515b9189214bc5c53723b2646101453a4f7a461eb08f346e3b1044887" id=31eab0c4-1857-406d-a326-b7805c140997 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.415895255Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:28:13 addons-808548 crio[769]: time="2025-10-17T19:28:13.415954767Z" level=info msg="Removed pod sandbox: 626948d515b9189214bc5c53723b2646101453a4f7a461eb08f346e3b1044887" id=31eab0c4-1857-406d-a326-b7805c140997 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	a8328a95cc880       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   695751ac50690       busybox                                     default
	53d269845a83e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          26 seconds ago       Running             csi-snapshotter                          0                   5280b98753dac       csi-hostpathplugin-srnfw                    kube-system
	508d623947dcb       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          27 seconds ago       Running             csi-provisioner                          0                   5280b98753dac       csi-hostpathplugin-srnfw                    kube-system
	534e46164a73e       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            28 seconds ago       Running             liveness-probe                           0                   5280b98753dac       csi-hostpathplugin-srnfw                    kube-system
	5579a2f9e5057       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           28 seconds ago       Running             hostpath                                 0                   5280b98753dac       csi-hostpathplugin-srnfw                    kube-system
	4b012af7a50a8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 29 seconds ago       Running             gcp-auth                                 0                   d1beaa31ee332       gcp-auth-78565c9fb4-cnh4w                   gcp-auth
	57e22e20440d1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                31 seconds ago       Running             node-driver-registrar                    0                   5280b98753dac       csi-hostpathplugin-srnfw                    kube-system
	c4e78204d42dc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            32 seconds ago       Running             gadget                                   0                   55093274c9c7b       gadget-qzzq2                                gadget
	3ddb7aa30a0d2       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             35 seconds ago       Running             controller                               0                   efeef1174b2fb       ingress-nginx-controller-675c5ddd98-bszbb   ingress-nginx
	9a21825a549c2       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              39 seconds ago       Running             registry-proxy                           0                   c784fb1dc3219       registry-proxy-5gbvf                        kube-system
	5d22bcde5dbdb       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              42 seconds ago       Running             csi-resizer                              0                   db6b3eb3c8b31       csi-hostpath-resizer-0                      kube-system
	9ca980090d556       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   43 seconds ago       Exited              patch                                    0                   00e3ff684c6df       ingress-nginx-admission-patch-56ccn         ingress-nginx
	56688cf87e4fa       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     43 seconds ago       Running             amd-gpu-device-plugin                    0                   6cb0e25b97918       amd-gpu-device-plugin-s9xrd                 kube-system
	59d6b1b073fe9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      44 seconds ago       Running             volume-snapshot-controller               0                   30cc544c47994       snapshot-controller-7d9fbc56b8-q75kr        kube-system
	2bb7b66e533ea       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              45 seconds ago       Running             yakd                                     0                   69b0e16d66811       yakd-dashboard-5ff678cb9-qgnw5              yakd-dashboard
	71af4816f74d2       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             48 seconds ago       Running             csi-attacher                             0                   45ff84ab750b4       csi-hostpath-attacher-0                     kube-system
	8ad2b4d2b3966       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   49 seconds ago       Running             csi-external-health-monitor-controller   0                   5280b98753dac       csi-hostpathplugin-srnfw                    kube-system
	e01b7f799459f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      50 seconds ago       Running             volume-snapshot-controller               0                   5eca2f15a7de6       snapshot-controller-7d9fbc56b8-qpq25        kube-system
	3eadefea7b82f       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     51 seconds ago       Running             nvidia-device-plugin-ctr                 0                   8d64acb3a014a       nvidia-device-plugin-daemonset-qh9hh        kube-system
	b37b72284c040       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               59 seconds ago       Running             cloud-spanner-emulator                   0                   635b739ab5d5f       cloud-spanner-emulator-86bd5cbb97-kt7zs     default
	fc2ba59434a35       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   4bd2f18c25d38       metrics-server-85b7d694d7-q44mn             kube-system
	199827710f7e2       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   41c07e0474bdc       kube-ingress-dns-minikube                   kube-system
	9978c81effa86       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   3c0fba2a66b65       local-path-provisioner-648f6765c9-29skz     local-path-storage
	91d90369c0267       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   96d22435ddd3c       ingress-nginx-admission-create-8h4tr        ingress-nginx
	5e0188d0e59ac       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   c4f7fbd236eac       registry-6b586f9694-ns7g9                   kube-system
	89b97e1cc3fdc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   f72978ef277e2       coredns-66bc5c9577-q7x6k                    kube-system
	00564264eaf2d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   74120e4daa8a3       storage-provisioner                         kube-system
	509b950592a64       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   39738ae66b812       kube-proxy-ck6l7                            kube-system
	c0f115c889023       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   08bb700090292       kindnet-lwg6r                               kube-system
	9486051a8e6db       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   ebc8ebe62e244       kube-scheduler-addons-808548                kube-system
	d471f8a340bfa       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   8d4504ba25314       kube-controller-manager-addons-808548       kube-system
	fed27e3c8e0a5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   73422ce7de852       etcd-addons-808548                          kube-system
	d41c518959459       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   80603ba658491       kube-apiserver-addons-808548                kube-system
	
	
	==> coredns [89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72] <==
	[INFO] 10.244.0.17:38419 - 30767 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003551018s
	[INFO] 10.244.0.17:54609 - 24768 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000104222s
	[INFO] 10.244.0.17:54609 - 24378 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000142061s
	[INFO] 10.244.0.17:48314 - 64271 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000077151s
	[INFO] 10.244.0.17:48314 - 64587 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000147449s
	[INFO] 10.244.0.17:51989 - 52162 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000083122s
	[INFO] 10.244.0.17:51989 - 51845 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000131544s
	[INFO] 10.244.0.17:48642 - 28773 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000134237s
	[INFO] 10.244.0.17:48642 - 28555 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000156411s
	[INFO] 10.244.0.22:47836 - 63431 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216544s
	[INFO] 10.244.0.22:42515 - 52478 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000299222s
	[INFO] 10.244.0.22:48527 - 46353 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000189426s
	[INFO] 10.244.0.22:52696 - 40679 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000234297s
	[INFO] 10.244.0.22:39544 - 13114 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000157033s
	[INFO] 10.244.0.22:42038 - 21341 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000203812s
	[INFO] 10.244.0.22:56694 - 23100 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003680483s
	[INFO] 10.244.0.22:35890 - 34158 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003758883s
	[INFO] 10.244.0.22:41903 - 58945 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004936035s
	[INFO] 10.244.0.22:47249 - 12512 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004995461s
	[INFO] 10.244.0.22:44285 - 63911 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005138946s
	[INFO] 10.244.0.22:58333 - 24755 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005401496s
	[INFO] 10.244.0.22:43046 - 10475 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004828828s
	[INFO] 10.244.0.22:43047 - 9430 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004987084s
	[INFO] 10.244.0.22:40310 - 4309 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001229734s
	[INFO] 10.244.0.22:52842 - 24289 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001347579s
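The NXDOMAIN fan-out above is expected resolver behavior rather than a failure: pod resolv.conf in this cluster carries a search path (cluster.local plus the GCE host's us-west1-a.c.k8s-minikube.internal domains) with the usual Kubernetes ndots:5, so each short name is tried against every suffix before the final NOERROR answer. A minimal Go sketch (an illustration only, assuming it runs inside a pod on this cluster) shows how a trailing dot marks the name fully qualified and skips the expansion:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The trailing dot marks the name fully qualified, so the resolver
		// sends one query instead of walking the search suffixes logged above.
		addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}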
	
	
	==> describe nodes <==
	Name:               addons-808548
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-808548
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=addons-808548
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_26_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-808548
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-808548"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:26:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-808548
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:28:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:27:45 +0000   Fri, 17 Oct 2025 19:26:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:27:45 +0000   Fri, 17 Oct 2025 19:26:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:27:45 +0000   Fri, 17 Oct 2025 19:26:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:27:45 +0000   Fri, 17 Oct 2025 19:26:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-808548
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                2a535284-69f6-4c0d-b477-eb46f22a04f4
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-86bd5cbb97-kt7zs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  gadget                      gadget-qzzq2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  gcp-auth                    gcp-auth-78565c9fb4-cnh4w                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-bszbb    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         115s
	  kube-system                 amd-gpu-device-plugin-s9xrd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 coredns-66bc5c9577-q7x6k                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     117s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 csi-hostpathplugin-srnfw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 etcd-addons-808548                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-lwg6r                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-addons-808548                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-addons-808548        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-ck6l7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-addons-808548                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 metrics-server-85b7d694d7-q44mn              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         116s
	  kube-system                 nvidia-device-plugin-daemonset-qh9hh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 registry-6b586f9694-ns7g9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 registry-creds-764b6fb674-d7p4h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 registry-proxy-5gbvf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 snapshot-controller-7d9fbc56b8-q75kr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 snapshot-controller-7d9fbc56b8-qpq25         0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  local-path-storage          local-path-provisioner-648f6765c9-29skz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-qgnw5               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     115s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 115s  kube-proxy       
	  Normal  Starting                 2m2s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s  kubelet          Node addons-808548 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s  kubelet          Node addons-808548 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s  kubelet          Node addons-808548 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           118s  node-controller  Node addons-808548 event: Registered Node addons-808548 in Controller
	  Normal  NodeReady                76s   kubelet          Node addons-808548 status is now: NodeReady
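The percentages in the Allocated resources table above are truncated integer ratios against the node's allocatable figures (8 CPUs = 8000m, 32863452Ki memory). A quick Go check of the arithmetic:

	package main

	import "fmt"

	func main() {
		// Truncated-integer percentages, matching kubectl describe above:
		fmt.Println("cpu:", 100*1050/8000, "%")       // 13 — 1050m requested of 8000m
		fmt.Println("mem:", 100*653312/32863452, "%") // 1  — 638Mi (653312Ki) of 32863452Ki
	}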
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ee 3b aa d9 f7 47 08 06
	[ +31.640977] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 82 e3 e2 d1 a0 ca 08 06
	[  +0.974315] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 64 4b 08 1b f1 08 06
	[  +0.037680] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 c6 c3 0b df b0 08 06
	[  +6.698602] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 6e 81 a6 8c 10 08 06
	[Oct17 19:15] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 39 37 e2 2e df 08 06
	[  +1.021941] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 58 09 4d fd 5d 08 06
	[  +0.027631] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 79 a9 da 64 32 08 06
	[  +6.719503] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 f7 f7 43 d3 43 08 06
	[Oct17 19:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a4 1e 3d 6e 9b 08 06
	[  +0.964252] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 77 4f 49 eb 8d 08 06
	[  +0.057147] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	
	
	==> etcd [fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c] <==
	{"level":"warn","ts":"2025-10-17T19:26:10.134291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.142772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.150417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.158206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.165721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.172487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.180713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.189666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.199333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.216555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.223272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.229927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:10.283628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:21.289617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:21.296477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:47.723298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:47.738552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:26:47.745084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38984","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:27:18.764411Z","caller":"traceutil/trace.go:172","msg":"trace[1794263440] transaction","detail":"{read_only:false; response_revision:1034; number_of_response:1; }","duration":"108.930138ms","start":"2025-10-17T19:27:18.655453Z","end":"2025-10-17T19:27:18.764383Z","steps":["trace[1794263440] 'process raft request'  (duration: 108.688647ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:27:24.486551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.929495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:27:24.486631Z","caller":"traceutil/trace.go:172","msg":"trace[1243683834] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1053; }","duration":"150.057781ms","start":"2025-10-17T19:27:24.336558Z","end":"2025-10-17T19:27:24.486616Z","steps":["trace[1243683834] 'range keys from in-memory index tree'  (duration: 149.84802ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:27:24.678934Z","caller":"traceutil/trace.go:172","msg":"trace[1188612772] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"123.923087ms","start":"2025-10-17T19:27:24.554988Z","end":"2025-10-17T19:27:24.678911Z","steps":["trace[1188612772] 'process raft request'  (duration: 123.804357ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:27:41.945236Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.285017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:27:41.945312Z","caller":"traceutil/trace.go:172","msg":"trace[825653420] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1153; }","duration":"110.378418ms","start":"2025-10-17T19:27:41.834915Z","end":"2025-10-17T19:27:41.945294Z","steps":["trace[825653420] 'agreement among raft nodes before linearized reading'  (duration: 32.147742ms)","trace[825653420] 'range keys from in-memory index tree'  (duration: 78.108542ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:27:41.945485Z","caller":"traceutil/trace.go:172","msg":"trace[1733746361] transaction","detail":"{read_only:false; response_revision:1154; number_of_response:1; }","duration":"156.084191ms","start":"2025-10-17T19:27:41.789379Z","end":"2025-10-17T19:27:41.945463Z","steps":["trace[1733746361] 'process raft request'  (duration: 77.735531ms)","trace[1733746361] 'compare'  (duration: 78.15599ms)"],"step_count":2}
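The warn-level entries above fire whenever a request exceeds etcd's fixed expected-duration of 100ms, and the paired info-level traces break the latency into steps. A small log-analysis sketch in Go (an illustration, assuming lines like these arrive on stdin) that filters out just the slow requests:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// entry keeps only the fields this filter needs from etcd's JSON logs.
	type entry struct {
		Msg  string `json:"msg"`
		Took string `json:"took"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e entry
			if json.Unmarshal(sc.Bytes(), &e) != nil {
				continue // skip any non-JSON lines
			}
			if e.Msg == "apply request took too long" {
				fmt.Println("slow request took", e.Took)
			}
		}
	}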
	
	
	==> gcp-auth [4b012af7a50a8d7a7a201239960e59b040263968ffd8451a94537a131b8dbf3a] <==
	2025/10/17 19:27:45 GCP Auth Webhook started!
	2025/10/17 19:28:04 Ready to marshal response ...
	2025/10/17 19:28:04 Ready to write response ...
	2025/10/17 19:28:04 Ready to marshal response ...
	2025/10/17 19:28:04 Ready to write response ...
	2025/10/17 19:28:05 Ready to marshal response ...
	2025/10/17 19:28:05 Ready to write response ...
	
	
	==> kernel <==
	 19:28:15 up  1:10,  0 user,  load average: 1.88, 1.96, 1.62
	Linux addons-808548 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb] <==
	I1017 19:26:19.527431       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:26:19.527875       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 19:26:49.527657       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 19:26:49.527801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 19:26:49.528866       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 19:26:49.530046       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1017 19:26:50.727823       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:26:50.727848       1 metrics.go:72] Registering metrics
	I1017 19:26:50.727906       1 controller.go:711] "Syncing nftables rules"
	I1017 19:26:59.527606       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:26:59.527650       1 main.go:301] handling current node
	I1017 19:27:09.528863       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:27:09.528910       1 main.go:301] handling current node
	I1017 19:27:19.527004       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:27:19.527052       1 main.go:301] handling current node
	I1017 19:27:29.527957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:27:29.527993       1 main.go:301] handling current node
	I1017 19:27:39.528826       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:27:39.528872       1 main.go:301] handling current node
	I1017 19:27:49.527538       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:27:49.527604       1 main.go:301] handling current node
	I1017 19:27:59.531846       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:27:59.531883       1 main.go:301] handling current node
	I1017 19:28:09.526872       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:28:09.526906       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14] <==
	W1017 19:27:13.604100       1 handler_proxy.go:99] no RequestInfo found in the context
	E1017 19:27:13.604167       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1017 19:27:13.604444       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.32.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.32.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.32.144:443: connect: connection refused" logger="UnhandledError"
	E1017 19:27:13.610179       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.32.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.32.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.32.144:443: connect: connection refused" logger="UnhandledError"
	W1017 19:27:14.606715       1 handler_proxy.go:99] no RequestInfo found in the context
	W1017 19:27:14.606715       1 handler_proxy.go:99] no RequestInfo found in the context
	E1017 19:27:14.606774       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1017 19:27:14.606794       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1017 19:27:14.606836       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1017 19:27:14.607970       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1017 19:27:18.692658       1 handler_proxy.go:99] no RequestInfo found in the context
	E1017 19:27:18.692659       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.32.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.32.144:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1017 19:27:18.692711       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1017 19:27:18.706917       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1017 19:28:13.137171       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58482: use of closed network connection
	E1017 19:28:13.291990       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58510: use of closed network connection
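The repeated 503s above are the aggregation layer probing the v1beta1.metrics.k8s.io APIService while metrics-server is still coming up (its Service IP 10.102.32.144 refuses connections until the pod is ready). One way to spot-check the condition afterwards, sketched by shelling out to kubectl and assuming the report's addons-808548 context is reachable:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Read the Available condition of the aggregated APIService that the
		// apiserver log above reports as failing.
		out, err := exec.Command("kubectl", "--context", "addons-808548",
			"get", "apiservice", "v1beta1.metrics.k8s.io",
			"-o", `jsonpath={.status.conditions[?(@.type=="Available")].status}`).CombinedOutput()
		fmt.Printf("available=%s err=%v\n", out, err)
	}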
	
	
	==> kube-controller-manager [d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561] <==
	I1017 19:26:17.698696       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 19:26:17.698729       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 19:26:17.699130       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:26:17.699159       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 19:26:17.699377       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 19:26:17.699432       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 19:26:17.699446       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:26:17.699659       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:26:17.700833       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:26:17.702791       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 19:26:17.703452       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:26:17.706523       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:26:17.709833       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 19:26:17.719143       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1017 19:26:19.899301       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1017 19:26:47.710678       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1017 19:26:47.710869       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1017 19:26:47.710905       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1017 19:26:47.729644       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1017 19:26:47.733008       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1017 19:26:47.811440       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:26:47.833803       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:27:02.703974       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1017 19:27:17.817415       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1017 19:27:17.843488       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0] <==
	I1017 19:26:19.172277       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:26:19.524891       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:26:19.625842       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:26:19.625883       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:26:19.625995       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:26:19.754242       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:26:19.754375       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:26:19.777311       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:26:19.777826       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:26:19.778283       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:26:19.780327       1 config.go:200] "Starting service config controller"
	I1017 19:26:19.780389       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:26:19.780433       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:26:19.780458       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:26:19.780515       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:26:19.780541       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:26:19.781197       1 config.go:309] "Starting node config controller"
	I1017 19:26:19.781253       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:26:19.882652       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:26:19.883251       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:26:19.883357       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:26:19.883397       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a] <==
	E1017 19:26:11.015145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:26:11.015086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:26:11.015182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:26:11.015449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:26:11.015581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:26:11.015606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:26:11.015714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:26:11.015717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:26:11.015762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:26:11.014951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:26:11.015798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:26:11.015802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:26:11.015818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:26:11.015921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:26:11.015994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:26:11.016115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:26:11.016115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:26:11.856512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:26:11.861684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:26:11.901872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:26:11.905921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:26:11.946287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:26:11.996839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:26:11.996903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1017 19:26:14.613381       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:27:32 addons-808548 kubelet[1279]: I1017 19:27:32.669555    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-s9xrd" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:27:32 addons-808548 kubelet[1279]: I1017 19:27:32.681002    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-s9xrd" podStartSLOduration=2.184953884 podStartE2EDuration="33.680980705s" podCreationTimestamp="2025-10-17 19:26:59 +0000 UTC" firstStartedPulling="2025-10-17 19:27:00.090655991 +0000 UTC m=+46.785838951" lastFinishedPulling="2025-10-17 19:27:31.586682827 +0000 UTC m=+78.281865772" observedRunningTime="2025-10-17 19:27:31.677877253 +0000 UTC m=+78.373060233" watchObservedRunningTime="2025-10-17 19:27:32.680980705 +0000 UTC m=+79.376163668"
	Oct 17 19:27:33 addons-808548 kubelet[1279]: I1017 19:27:33.685426    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpath-resizer-0" podStartSLOduration=41.180812018 podStartE2EDuration="1m13.685408327s" podCreationTimestamp="2025-10-17 19:26:20 +0000 UTC" firstStartedPulling="2025-10-17 19:27:00.098861808 +0000 UTC m=+46.794044756" lastFinishedPulling="2025-10-17 19:27:32.60345812 +0000 UTC m=+79.298641065" observedRunningTime="2025-10-17 19:27:33.684645248 +0000 UTC m=+80.379828212" watchObservedRunningTime="2025-10-17 19:27:33.685408327 +0000 UTC m=+80.380591288"
	Oct 17 19:27:33 addons-808548 kubelet[1279]: I1017 19:27:33.745050    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6sj4\" (UniqueName: \"kubernetes.io/projected/171478cd-4d36-411e-a330-fb9e3a90646b-kube-api-access-q6sj4\") pod \"171478cd-4d36-411e-a330-fb9e3a90646b\" (UID: \"171478cd-4d36-411e-a330-fb9e3a90646b\") "
	Oct 17 19:27:33 addons-808548 kubelet[1279]: I1017 19:27:33.747552    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/171478cd-4d36-411e-a330-fb9e3a90646b-kube-api-access-q6sj4" (OuterVolumeSpecName: "kube-api-access-q6sj4") pod "171478cd-4d36-411e-a330-fb9e3a90646b" (UID: "171478cd-4d36-411e-a330-fb9e3a90646b"). InnerVolumeSpecName "kube-api-access-q6sj4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 17 19:27:33 addons-808548 kubelet[1279]: I1017 19:27:33.845585    1279 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q6sj4\" (UniqueName: \"kubernetes.io/projected/171478cd-4d36-411e-a330-fb9e3a90646b-kube-api-access-q6sj4\") on node \"addons-808548\" DevicePath \"\""
	Oct 17 19:27:34 addons-808548 kubelet[1279]: I1017 19:27:34.679880    1279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00e3ff684c6df2f3da641ce7a9e38fff818562e12fd12e036a091f29aaa4268a"
	Oct 17 19:27:35 addons-808548 kubelet[1279]: I1017 19:27:35.686223    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5gbvf" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:27:36 addons-808548 kubelet[1279]: I1017 19:27:36.689670    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5gbvf" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:27:39 addons-808548 kubelet[1279]: I1017 19:27:39.713367    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-bszbb" podStartSLOduration=51.839369907 podStartE2EDuration="1m19.713345948s" podCreationTimestamp="2025-10-17 19:26:20 +0000 UTC" firstStartedPulling="2025-10-17 19:27:11.112958828 +0000 UTC m=+57.808141785" lastFinishedPulling="2025-10-17 19:27:38.98693488 +0000 UTC m=+85.682117826" observedRunningTime="2025-10-17 19:27:39.712447546 +0000 UTC m=+86.407630509" watchObservedRunningTime="2025-10-17 19:27:39.713345948 +0000 UTC m=+86.408528911"
	Oct 17 19:27:39 addons-808548 kubelet[1279]: I1017 19:27:39.713518    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-5gbvf" podStartSLOduration=5.413397846 podStartE2EDuration="40.713508286s" podCreationTimestamp="2025-10-17 19:26:59 +0000 UTC" firstStartedPulling="2025-10-17 19:27:00.117033774 +0000 UTC m=+46.812216731" lastFinishedPulling="2025-10-17 19:27:35.417144219 +0000 UTC m=+82.112327171" observedRunningTime="2025-10-17 19:27:35.701056719 +0000 UTC m=+82.396239682" watchObservedRunningTime="2025-10-17 19:27:39.713508286 +0000 UTC m=+86.408691249"
	Oct 17 19:27:42 addons-808548 kubelet[1279]: I1017 19:27:42.732729    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-qzzq2" podStartSLOduration=65.925489799 podStartE2EDuration="1m23.732704235s" podCreationTimestamp="2025-10-17 19:26:19 +0000 UTC" firstStartedPulling="2025-10-17 19:27:24.552933835 +0000 UTC m=+71.248116780" lastFinishedPulling="2025-10-17 19:27:42.360148273 +0000 UTC m=+89.055331216" observedRunningTime="2025-10-17 19:27:42.732039154 +0000 UTC m=+89.427222116" watchObservedRunningTime="2025-10-17 19:27:42.732704235 +0000 UTC m=+89.427887198"
	Oct 17 19:27:47 addons-808548 kubelet[1279]: I1017 19:27:47.458065    1279 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 17 19:27:47 addons-808548 kubelet[1279]: I1017 19:27:47.458121    1279 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 17 19:27:48 addons-808548 kubelet[1279]: I1017 19:27:48.033523    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-cnh4w" podStartSLOduration=68.574648841 podStartE2EDuration="1m22.033497331s" podCreationTimestamp="2025-10-17 19:26:26 +0000 UTC" firstStartedPulling="2025-10-17 19:27:31.818841817 +0000 UTC m=+78.514024761" lastFinishedPulling="2025-10-17 19:27:45.277690293 +0000 UTC m=+91.972873251" observedRunningTime="2025-10-17 19:27:45.747887054 +0000 UTC m=+92.443070017" watchObservedRunningTime="2025-10-17 19:27:48.033497331 +0000 UTC m=+94.728680291"
	Oct 17 19:27:48 addons-808548 kubelet[1279]: I1017 19:27:48.771316    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-srnfw" podStartSLOduration=1.353062435 podStartE2EDuration="49.771281614s" podCreationTimestamp="2025-10-17 19:26:59 +0000 UTC" firstStartedPulling="2025-10-17 19:27:00.080040423 +0000 UTC m=+46.775223378" lastFinishedPulling="2025-10-17 19:27:48.498259613 +0000 UTC m=+95.193442557" observedRunningTime="2025-10-17 19:27:48.769633685 +0000 UTC m=+95.464816648" watchObservedRunningTime="2025-10-17 19:27:48.771281614 +0000 UTC m=+95.466464578"
	Oct 17 19:27:49 addons-808548 kubelet[1279]: I1017 19:27:49.390088    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae37fb3a-d139-4144-9c78-66cd0377d24b" path="/var/lib/kubelet/pods/ae37fb3a-d139-4144-9c78-66cd0377d24b/volumes"
	Oct 17 19:27:49 addons-808548 kubelet[1279]: I1017 19:27:49.390526    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc186093-5b9e-4cf6-b7c1-0a7cb91d5802" path="/var/lib/kubelet/pods/fc186093-5b9e-4cf6-b7c1-0a7cb91d5802/volumes"
	Oct 17 19:28:03 addons-808548 kubelet[1279]: E1017 19:28:03.588331    1279 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 17 19:28:03 addons-808548 kubelet[1279]: E1017 19:28:03.588425    1279 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/325a60a7-5f62-4ab1-9199-ac88319f2912-gcr-creds podName:325a60a7-5f62-4ab1-9199-ac88319f2912 nodeName:}" failed. No retries permitted until 2025-10-17 19:29:07.588403624 +0000 UTC m=+174.283586580 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/325a60a7-5f62-4ab1-9199-ac88319f2912-gcr-creds") pod "registry-creds-764b6fb674-d7p4h" (UID: "325a60a7-5f62-4ab1-9199-ac88319f2912") : secret "registry-creds-gcr" not found
	Oct 17 19:28:05 addons-808548 kubelet[1279]: I1017 19:28:05.099394    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sgk8\" (UniqueName: \"kubernetes.io/projected/0103b3dd-566b-45eb-803e-5794db655669-kube-api-access-2sgk8\") pod \"busybox\" (UID: \"0103b3dd-566b-45eb-803e-5794db655669\") " pod="default/busybox"
	Oct 17 19:28:05 addons-808548 kubelet[1279]: I1017 19:28:05.099449    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0103b3dd-566b-45eb-803e-5794db655669-gcp-creds\") pod \"busybox\" (UID: \"0103b3dd-566b-45eb-803e-5794db655669\") " pod="default/busybox"
	Oct 17 19:28:07 addons-808548 kubelet[1279]: I1017 19:28:07.849852    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.887753773 podStartE2EDuration="3.849812245s" podCreationTimestamp="2025-10-17 19:28:04 +0000 UTC" firstStartedPulling="2025-10-17 19:28:05.298771883 +0000 UTC m=+111.993954838" lastFinishedPulling="2025-10-17 19:28:07.260830353 +0000 UTC m=+113.956013310" observedRunningTime="2025-10-17 19:28:07.849103477 +0000 UTC m=+114.544286441" watchObservedRunningTime="2025-10-17 19:28:07.849812245 +0000 UTC m=+114.544995208"
	Oct 17 19:28:13 addons-808548 kubelet[1279]: I1017 19:28:13.383685    1279 scope.go:117] "RemoveContainer" containerID="bdef9f94d6691b0a1c1bd0b753d3bf8a4348d41896e0f5ad42aab54d991176de"
	Oct 17 19:28:13 addons-808548 kubelet[1279]: I1017 19:28:13.393888    1279 scope.go:117] "RemoveContainer" containerID="1fb15864a08c56a0f1faba9c2255f59fafd779ab12b39a9841609e31325b295c"
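The pod_startup_latency_tracker lines above encode a simple subtraction: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same span minus the image-pull window (the default/busybox numbers bear this out exactly). Reproducing the busybox figure in Go:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// observedRunningTime minus podCreationTimestamp for default/busybox,
		// taken from the kubelet log above.
		created, _ := time.Parse(time.RFC3339, "2025-10-17T19:28:04Z")
		running, _ := time.Parse(time.RFC3339Nano, "2025-10-17T19:28:07.849812245Z")
		fmt.Println(running.Sub(created)) // 3.849812245s
	}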
	
	
	==> storage-provisioner [00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32] <==
	W1017 19:27:50.424522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:27:52.427945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:27:52.434710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:27:54.437611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:27:54.441808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:27:56.445550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:27:56.449875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:27:58.453101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:27:58.459203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:00.462838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:00.466939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:02.470210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:02.477015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:04.479867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:04.485848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:06.488887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:06.493152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:08.496939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:08.501445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:10.505401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:10.509340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:12.512673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:12.518908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:14.522455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:14.526976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
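storage-provisioner polls every two seconds, and each poll trips the deprecation warning because it still reads v1 Endpoints (plausibly its leader-election lock, though that is an assumption here) rather than the discovery.k8s.io/v1 EndpointSlice the message recommends. A hedged sketch for inspecting the replacement resource, again shelling out to kubectl with the report's context:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// List the EndpointSlice objects the warnings above point to.
		out, err := exec.Command("kubectl", "--context", "addons-808548",
			"get", "endpointslices", "-n", "kube-system").CombinedOutput()
		fmt.Printf("%s(err=%v)\n", out, err)
	}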
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-808548 -n addons-808548
helpers_test.go:269: (dbg) Run:  kubectl --context addons-808548 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-8h4tr ingress-nginx-admission-patch-56ccn registry-creds-764b6fb674-d7p4h
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-808548 describe pod ingress-nginx-admission-create-8h4tr ingress-nginx-admission-patch-56ccn registry-creds-764b6fb674-d7p4h
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-808548 describe pod ingress-nginx-admission-create-8h4tr ingress-nginx-admission-patch-56ccn registry-creds-764b6fb674-d7p4h: exit status 1 (61.457735ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8h4tr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-56ccn" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-d7p4h" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-808548 describe pod ingress-nginx-admission-create-8h4tr ingress-nginx-admission-patch-56ccn registry-creds-764b6fb674-d7p4h: exit status 1
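
The three NotFound errors above are a timing artifact rather than a product failure: the ingress-nginx admission jobs and the registry-creds pod completed and were garbage-collected between the pod listing at helpers_test.go:269 and the describe at helpers_test.go:285. A race-tolerant sketch of that post-mortem step (an illustration, not what helpers_test.go actually runs) resolves namespaces first and tolerates deletions:

	kubectl --context addons-808548 get po -A --field-selector=status.phase!=Running \
	  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name --no-headers |
	while read -r ns name; do
	  kubectl --context addons-808548 -n "$ns" describe pod "$name" || true
	done
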
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable headlamp --alsologtostderr -v=1: exit status 11 (237.738105ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:28:15.986396  149759 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:28:15.986691  149759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:15.986702  149759 out.go:374] Setting ErrFile to fd 2...
	I1017 19:28:15.986708  149759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:15.986921  149759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:28:15.987223  149759 mustload.go:65] Loading cluster: addons-808548
	I1017 19:28:15.987593  149759 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:15.987612  149759 addons.go:606] checking whether the cluster is paused
	I1017 19:28:15.987723  149759 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:15.987756  149759 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:28:15.988156  149759 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:28:16.007542  149759 ssh_runner.go:195] Run: systemctl --version
	I1017 19:28:16.007597  149759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:28:16.026791  149759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:28:16.124585  149759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:28:16.124690  149759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:28:16.155325  149759 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:28:16.155346  149759 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:28:16.155350  149759 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:28:16.155353  149759 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:28:16.155356  149759 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:28:16.155358  149759 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:28:16.155361  149759 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:28:16.155363  149759 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:28:16.155365  149759 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:28:16.155371  149759 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:28:16.155374  149759 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:28:16.155376  149759 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:28:16.155379  149759 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:28:16.155381  149759 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:28:16.155384  149759 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:28:16.155392  149759 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:28:16.155397  149759 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:28:16.155401  149759 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:28:16.155404  149759 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:28:16.155406  149759 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:28:16.155409  149759 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:28:16.155411  149759 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:28:16.155413  149759 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:28:16.155416  149759 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:28:16.155418  149759 cri.go:89] found id: ""
	I1017 19:28:16.155455  149759 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:28:16.170037  149759 out.go:203] 
	W1017 19:28:16.171531  149759 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:28:16.171553  149759 out.go:285] * 
	W1017 19:28:16.174520  149759 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:28:16.176035  149759 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.63s)
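
Every MK_ADDON_DISABLE_PAUSED failure in this run has the same shape: before disabling an addon, minikube checks whether the cluster is paused, and that check shells out to "sudo runc list -f json", which exits 1 because /run/runc does not exist on this crio node. The likeliest reading of the error text (an inference, not something the log states outright) is that the configured OCI runtime keeps its state elsewhere, e.g. crun under /run/crun, so runc's default state directory was never created. A minimal way to reproduce the check by hand, assuming the addons-808548 profile is still running:

	# the container-listing half of the check succeeds, so the runtime itself is healthy
	minikube -p addons-808548 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the pause probe is the half that fails
	minikube -p addons-808548 ssh -- sudo runc list -f json
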

TestAddons/parallel/CloudSpanner (5.29s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-kt7zs" [af2b386e-9293-47ea-84d7-0b71b7ea0247] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003456206s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (285.753132ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:28:31.735790  151161 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:28:31.736131  151161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:31.736143  151161 out.go:374] Setting ErrFile to fd 2...
	I1017 19:28:31.736150  151161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:31.736398  151161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:28:31.736734  151161 mustload.go:65] Loading cluster: addons-808548
	I1017 19:28:31.737235  151161 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:31.737260  151161 addons.go:606] checking whether the cluster is paused
	I1017 19:28:31.737392  151161 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:31.737408  151161 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:28:31.737887  151161 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:28:31.761293  151161 ssh_runner.go:195] Run: systemctl --version
	I1017 19:28:31.761369  151161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:28:31.784265  151161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:28:31.891077  151161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:28:31.891164  151161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:28:31.930533  151161 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:28:31.930558  151161 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:28:31.930565  151161 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:28:31.930570  151161 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:28:31.930575  151161 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:28:31.930579  151161 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:28:31.930596  151161 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:28:31.930601  151161 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:28:31.930605  151161 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:28:31.930612  151161 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:28:31.930617  151161 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:28:31.930621  151161 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:28:31.930625  151161 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:28:31.930630  151161 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:28:31.930633  151161 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:28:31.930644  151161 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:28:31.930653  151161 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:28:31.930657  151161 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:28:31.930661  151161 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:28:31.930665  151161 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:28:31.930669  151161 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:28:31.930673  151161 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:28:31.930678  151161 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:28:31.930684  151161 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:28:31.930688  151161 cri.go:89] found id: ""
	I1017 19:28:31.930735  151161 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:28:31.950679  151161 out.go:203] 
	W1017 19:28:31.952372  151161 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:28:31.952404  151161 out.go:285] * 
	W1017 19:28:31.956652  151161 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:28:31.958485  151161 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.29s)

TestAddons/parallel/LocalPath (15.16s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-808548 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-808548 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-808548 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [ca6078ff-8c66-487c-88ef-c721defe3032] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [ca6078ff-8c66-487c-88ef-c721defe3032] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [ca6078ff-8c66-487c-88ef-c721defe3032] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.002967864s
addons_test.go:967: (dbg) Run:  kubectl --context addons-808548 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 ssh "cat /opt/local-path-provisioner/pvc-c3b2c4b5-817c-4b9b-a34b-00566c5e90d3_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-808548 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-808548 delete pvc test-pvc
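
The host path read at addons_test.go:976 follows local-path-provisioner's on-disk naming scheme, /opt/local-path-provisioner/<volumeName>_<namespace>_<pvcName>. While the PVC is still bound, the volume name embedded in that path can be recovered directly (a sketch; neither command is part of the test):

	kubectl --context addons-808548 get pvc test-pvc -o jsonpath='{.spec.volumeName}'
	minikube -p addons-808548 ssh -- ls /opt/local-path-provisioner/
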
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (248.370748ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:28:39.055956  152205 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:28:39.056246  152205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:39.056257  152205 out.go:374] Setting ErrFile to fd 2...
	I1017 19:28:39.056261  152205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:39.056473  152205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:28:39.056778  152205 mustload.go:65] Loading cluster: addons-808548
	I1017 19:28:39.057128  152205 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:39.057144  152205 addons.go:606] checking whether the cluster is paused
	I1017 19:28:39.057225  152205 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:39.057237  152205 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:28:39.057643  152205 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:28:39.077170  152205 ssh_runner.go:195] Run: systemctl --version
	I1017 19:28:39.077261  152205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:28:39.096685  152205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:28:39.197064  152205 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:28:39.197172  152205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:28:39.228381  152205 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:28:39.228409  152205 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:28:39.228417  152205 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:28:39.228423  152205 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:28:39.228429  152205 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:28:39.228435  152205 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:28:39.228441  152205 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:28:39.228446  152205 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:28:39.228450  152205 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:28:39.228459  152205 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:28:39.228470  152205 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:28:39.228475  152205 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:28:39.228483  152205 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:28:39.228488  152205 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:28:39.228495  152205 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:28:39.228499  152205 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:28:39.228502  152205 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:28:39.228507  152205 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:28:39.228509  152205 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:28:39.228512  152205 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:28:39.228531  152205 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:28:39.228540  152205 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:28:39.228546  152205 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:28:39.228551  152205 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:28:39.228560  152205 cri.go:89] found id: ""
	I1017 19:28:39.228611  152205 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:28:39.243706  152205 out.go:203] 
	W1017 19:28:39.245240  152205 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:28:39.245268  152205 out.go:285] * 
	W1017 19:28:39.248991  152205 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:28:39.251126  152205 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (15.16s)

TestAddons/parallel/NvidiaDevicePlugin (5.24s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qh9hh" [5874d0fa-f0c2-4888-8ea5-7dda59b9164e] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003533592s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (238.408607ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:28:26.475981  150347 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:28:26.476130  150347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:26.476139  150347 out.go:374] Setting ErrFile to fd 2...
	I1017 19:28:26.476143  150347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:26.476323  150347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:28:26.476574  150347 mustload.go:65] Loading cluster: addons-808548
	I1017 19:28:26.476918  150347 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:26.476931  150347 addons.go:606] checking whether the cluster is paused
	I1017 19:28:26.477004  150347 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:26.477015  150347 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:28:26.477391  150347 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:28:26.495928  150347 ssh_runner.go:195] Run: systemctl --version
	I1017 19:28:26.496038  150347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:28:26.513794  150347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:28:26.610821  150347 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:28:26.610905  150347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:28:26.642113  150347 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:28:26.642137  150347 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:28:26.642141  150347 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:28:26.642144  150347 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:28:26.642147  150347 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:28:26.642150  150347 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:28:26.642152  150347 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:28:26.642155  150347 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:28:26.642157  150347 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:28:26.642162  150347 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:28:26.642165  150347 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:28:26.642167  150347 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:28:26.642170  150347 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:28:26.642172  150347 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:28:26.642174  150347 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:28:26.642181  150347 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:28:26.642184  150347 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:28:26.642188  150347 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:28:26.642191  150347 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:28:26.642193  150347 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:28:26.642200  150347 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:28:26.642203  150347 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:28:26.642205  150347 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:28:26.642207  150347 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:28:26.642210  150347 cri.go:89] found id: ""
	I1017 19:28:26.642246  150347 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:28:26.657548  150347 out.go:203] 
	W1017 19:28:26.659176  150347 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:28:26.659205  150347 out.go:285] * 
	W1017 19:28:26.662253  150347 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:28:26.664022  150347 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.24s)

TestAddons/parallel/Yakd (5.25s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-qgnw5" [1802f5a5-4783-4768-a5d3-743a93371550] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003284643s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable yakd --alsologtostderr -v=1: exit status 11 (242.154822ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:28:23.904315  150105 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:28:23.904619  150105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:23.904631  150105 out.go:374] Setting ErrFile to fd 2...
	I1017 19:28:23.904638  150105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:23.904931  150105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:28:23.905246  150105 mustload.go:65] Loading cluster: addons-808548
	I1017 19:28:23.905649  150105 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:23.905668  150105 addons.go:606] checking whether the cluster is paused
	I1017 19:28:23.905809  150105 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:23.905826  150105 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:28:23.906204  150105 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:28:23.925051  150105 ssh_runner.go:195] Run: systemctl --version
	I1017 19:28:23.925128  150105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:28:23.943634  150105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:28:24.039575  150105 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:28:24.039659  150105 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:28:24.071009  150105 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:28:24.071046  150105 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:28:24.071050  150105 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:28:24.071054  150105 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:28:24.071056  150105 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:28:24.071060  150105 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:28:24.071062  150105 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:28:24.071065  150105 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:28:24.071067  150105 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:28:24.071077  150105 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:28:24.071085  150105 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:28:24.071087  150105 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:28:24.071090  150105 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:28:24.071093  150105 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:28:24.071095  150105 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:28:24.071102  150105 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:28:24.071104  150105 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:28:24.071113  150105 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:28:24.071116  150105 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:28:24.071118  150105 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:28:24.071120  150105 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:28:24.071123  150105 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:28:24.071125  150105 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:28:24.071127  150105 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:28:24.071129  150105 cri.go:89] found id: ""
	I1017 19:28:24.071176  150105 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:28:24.087684  150105 out.go:203] 
	W1017 19:28:24.090095  150105 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:28:24.090136  150105 out.go:285] * 
	W1017 19:28:24.093491  150105 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:28:24.095350  150105 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-s9xrd" [b9ac4437-8f9f-4841-8858-358c218c25d2] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003567882s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-808548 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-808548 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (240.300241ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:28:21.231596  149992 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:28:21.231933  149992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:21.231943  149992 out.go:374] Setting ErrFile to fd 2...
	I1017 19:28:21.231947  149992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:28:21.232182  149992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:28:21.232438  149992 mustload.go:65] Loading cluster: addons-808548
	I1017 19:28:21.232820  149992 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:21.232835  149992 addons.go:606] checking whether the cluster is paused
	I1017 19:28:21.232916  149992 config.go:182] Loaded profile config "addons-808548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:28:21.232929  149992 host.go:66] Checking if "addons-808548" exists ...
	I1017 19:28:21.233437  149992 cli_runner.go:164] Run: docker container inspect addons-808548 --format={{.State.Status}}
	I1017 19:28:21.252537  149992 ssh_runner.go:195] Run: systemctl --version
	I1017 19:28:21.252592  149992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-808548
	I1017 19:28:21.271115  149992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32889 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/addons-808548/id_rsa Username:docker}
	I1017 19:28:21.367701  149992 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:28:21.367814  149992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:28:21.398620  149992 cri.go:89] found id: "53d269845a83e0b0eeb72bba4a81dd35762f03f008a88b4b40572369579ef9bc"
	I1017 19:28:21.398651  149992 cri.go:89] found id: "508d623947dcb086788b3685c5b6294074ef57c05dd67f31d6f91c65af7c55bf"
	I1017 19:28:21.398656  149992 cri.go:89] found id: "534e46164a73e468629e2b0450303955baa99f6a82a6ea3964979247ebeda1e9"
	I1017 19:28:21.398661  149992 cri.go:89] found id: "5579a2f9e5057c4936f7925d6429f48e97c80eae94f67b23acec185afea3ec8e"
	I1017 19:28:21.398666  149992 cri.go:89] found id: "57e22e20440d18f7b1df42f72dfe27fd5506a997445e731911378c0273b9900d"
	I1017 19:28:21.398671  149992 cri.go:89] found id: "9a21825a549c2bc88edba61fde176b35613d551d70aaa977b237caf19980e02c"
	I1017 19:28:21.398676  149992 cri.go:89] found id: "5d22bcde5dbdbc2459794d89f7ec6a2f83218b111b5f6e9cf17a35bf973a1c01"
	I1017 19:28:21.398681  149992 cri.go:89] found id: "56688cf87e4fa0f56843e7d5b3a2d50cb8c799fa9a5a1b82d22605e1cc01d3a9"
	I1017 19:28:21.398685  149992 cri.go:89] found id: "59d6b1b073fe95a8318bb8e4794d846882644156cbaf6554403ce1473424e5f8"
	I1017 19:28:21.398705  149992 cri.go:89] found id: "71af4816f74d24a943fd8f9571dd90112dd7e287cb24a3d6d00a17303031ed93"
	I1017 19:28:21.398712  149992 cri.go:89] found id: "8ad2b4d2b3966a077e65676d5a0b54c9f7cb123d2e630061873af3a2fd394715"
	I1017 19:28:21.398715  149992 cri.go:89] found id: "e01b7f799459f362e1615d2874e789de96b55dea2be9f7bd151885412f79e27c"
	I1017 19:28:21.398717  149992 cri.go:89] found id: "3eadefea7b82f5116cedbc399638c5074600170540b74d139653eec5ae9ac271"
	I1017 19:28:21.398720  149992 cri.go:89] found id: "fc2ba59434a3555a915601771705d8b57ab5a1e081166b2cc809481a6e7685d1"
	I1017 19:28:21.398723  149992 cri.go:89] found id: "199827710f7e227d5b78d24efe4fc66db6c38bbd98c4763db59557c5ff3aa55f"
	I1017 19:28:21.398733  149992 cri.go:89] found id: "5e0188d0e59acbba6130dcae3ed29a07c0a86411fab7119ebdea23fd55f650d8"
	I1017 19:28:21.398772  149992 cri.go:89] found id: "89b97e1cc3fdc4e80fe5b5c0a17a6b5655f6fb31176502dd7482f7ab06e88c72"
	I1017 19:28:21.398779  149992 cri.go:89] found id: "00564264eaf2dd0f8c808895327890cc3a9207c71c75f36572215028c4d7be32"
	I1017 19:28:21.398783  149992 cri.go:89] found id: "509b950592a64e85a2da67a94ff5de8942f35cb944dead64039b493cf71b0de0"
	I1017 19:28:21.398787  149992 cri.go:89] found id: "c0f115c889023b664cf2c31a26dd8104e69d004862e06fb35ef6671682c384fb"
	I1017 19:28:21.398793  149992 cri.go:89] found id: "9486051a8e6db23ff4da74906d638edbe16c2a0fde99b02b3c43a98eeff8699a"
	I1017 19:28:21.398797  149992 cri.go:89] found id: "d471f8a340bfabc4c081c062bd860bdd75afaac6c0b930db62fb9a387b80c561"
	I1017 19:28:21.398804  149992 cri.go:89] found id: "fed27e3c8e0a54bd51457df6b682717d83a863b7efb511b9a59c5a6344711c9c"
	I1017 19:28:21.398808  149992 cri.go:89] found id: "d41c518959459a2dfd2ba4afe136d439a94e8bcb688c78d8b894e062e7d14d14"
	I1017 19:28:21.398814  149992 cri.go:89] found id: ""
	I1017 19:28:21.398870  149992 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:28:21.413921  149992 out.go:203] 
	W1017 19:28:21.415850  149992 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:28:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:28:21.415878  149992 out.go:285] * 
	W1017 19:28:21.418968  149992 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:28:21.420839  149992 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-808548 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.25s)
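The disable never reaches the addon itself: minikube's paused-state check shells out to "runc list -f json", and on this crio node the default runc state directory /run/runc does not exist. A minimal reproduction sketch, assuming the profile name from the log above; the crictl variant is a CRI-level alternative for listing containers, not the check minikube actually runs:

	# Re-run the exact check that failed; expect the same /run/runc error under crio.
	minikube -p addons-808548 ssh "sudo runc list -f json"
	# List containers through the CRI endpoint instead, which does not depend on
	# where the OCI runtime keeps its state directory.
	minikube -p addons-808548 ssh "sudo crictl ps --all"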

TestFunctional/parallel/ServiceCmdConnect (603.08s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-558322 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-558322 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-qg7l8" [b595efd3-0d05-47f6-ab7a-c8779e7b27a7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-558322 -n functional-558322
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-17 19:44:50.372748684 +0000 UTC m=+1168.925705778
functional_test.go:1645: (dbg) Run:  kubectl --context functional-558322 describe po hello-node-connect-7d85dfc575-qg7l8 -n default
functional_test.go:1645: (dbg) kubectl --context functional-558322 describe po hello-node-connect-7d85dfc575-qg7l8 -n default:
Name:             hello-node-connect-7d85dfc575-qg7l8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-558322/192.168.49.2
Start Time:       Fri, 17 Oct 2025 19:34:49 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j6xcg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-j6xcg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qg7l8 to functional-558322
  Normal   Pulling    7m5s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m5s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m5s (x5 over 10m)    kubelet            Error: ErrImagePull
  Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
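The events above pin the root cause: the node's container-image stack enforces short-name mode, so the unqualified reference kicbase/echo-server:latest is rejected as ambiguous rather than resolved against a default registry. A hedged fix sketch; the fully qualified name and tag below are assumptions about where the image lives, not taken from this log:

	# Use a fully qualified reference so no short-name resolution is attempted.
	kubectl --context functional-558322 create deployment hello-node-connect \
	  --image=docker.io/kicbase/echo-server:1.0
	# Inspect the node's short-name policy (standard containers-registries.conf locations).
	minikube -p functional-558322 ssh "grep -R short-name /etc/containers/"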
functional_test.go:1645: (dbg) Run:  kubectl --context functional-558322 logs hello-node-connect-7d85dfc575-qg7l8 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-558322 logs hello-node-connect-7d85dfc575-qg7l8 -n default: exit status 1 (70.65593ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-qg7l8" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-558322 logs hello-node-connect-7d85dfc575-qg7l8 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-558322 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-qg7l8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-558322/192.168.49.2
Start Time:       Fri, 17 Oct 2025 19:34:49 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j6xcg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-j6xcg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qg7l8 to functional-558322
  Normal   Pulling    7m5s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m5s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m5s (x5 over 10m)    kubelet            Error: ErrImagePull
  Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-558322 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-558322 logs -l app=hello-node-connect: exit status 1 (65.82266ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-qg7l8" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-558322 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-558322 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.0.232
IPs:                      10.104.0.232
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31721/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
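The empty Endpoints: field is the service-side symptom of the same failure: the NodePort service selects app=hello-node-connect, but with no Ready pod behind it no endpoint is published, so connections to NodePort 31721 have no backend. A quick confirmation using only standard kubectl:

	# An empty ENDPOINTS column means the service has no Ready backends.
	kubectl --context functional-558322 get endpoints hello-node-connect
	kubectl --context functional-558322 get pods -l app=hello-node-connect -o wide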
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-558322
helpers_test.go:243: (dbg) docker inspect functional-558322:

-- stdout --
	[
	    {
	        "Id": "843b89e98c3078e4088271a6bf6a881a82521600af4a4554cb6eb55c42125550",
	        "Created": "2025-10-17T19:32:22.56367289Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 163411,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:32:22.608923753Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/843b89e98c3078e4088271a6bf6a881a82521600af4a4554cb6eb55c42125550/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/843b89e98c3078e4088271a6bf6a881a82521600af4a4554cb6eb55c42125550/hostname",
	        "HostsPath": "/var/lib/docker/containers/843b89e98c3078e4088271a6bf6a881a82521600af4a4554cb6eb55c42125550/hosts",
	        "LogPath": "/var/lib/docker/containers/843b89e98c3078e4088271a6bf6a881a82521600af4a4554cb6eb55c42125550/843b89e98c3078e4088271a6bf6a881a82521600af4a4554cb6eb55c42125550-json.log",
	        "Name": "/functional-558322",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-558322:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-558322",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "843b89e98c3078e4088271a6bf6a881a82521600af4a4554cb6eb55c42125550",
	                "LowerDir": "/var/lib/docker/overlay2/980b213b48b2712cf06dd1ea2e7a94d1808c69632fcebd24a5eb41ecfd68755c-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/980b213b48b2712cf06dd1ea2e7a94d1808c69632fcebd24a5eb41ecfd68755c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/980b213b48b2712cf06dd1ea2e7a94d1808c69632fcebd24a5eb41ecfd68755c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/980b213b48b2712cf06dd1ea2e7a94d1808c69632fcebd24a5eb41ecfd68755c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-558322",
	                "Source": "/var/lib/docker/volumes/functional-558322/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-558322",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-558322",
	                "name.minikube.sigs.k8s.io": "functional-558322",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a87640c35a29620a64104a531510635ec291e0538edf32ff6ec7f511b34edea0",
	            "SandboxKey": "/var/run/docker/netns/a87640c35a29",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-558322": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:4f:1f:2a:3d:7b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7efaa79bb63b94a883942f6a747e97ce3d076ea0f69a1b45e08d800c775b2a97",
	                    "EndpointID": "516ae336b9110ef5b6517e18eeb2c858b5e5417b3d994b83a09aa22b240a5f80",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-558322",
	                        "843b89e98c30"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
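In the inspect output above, HostConfig.PortBindings requests ephemeral host ports ("HostPort": ""), and the actual assignments appear only under NetworkSettings.Ports, e.g. 8441/tcp bound to 127.0.0.1:32902 for the API server. During triage the live mapping can also be read directly:

	# Print the host side of one container port mapping.
	docker port functional-558322 8441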
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-558322 -n functional-558322
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-558322 logs -n 25: (1.438918927s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-558322 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ ssh            │ functional-558322 ssh -- ls -la /mount-9p                                                                          │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ ssh            │ functional-558322 ssh sudo umount -f /mount-9p                                                                     │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	│ mount          │ -p functional-558322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3858166900/001:/mount2 --alsologtostderr -v=1 │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	│ ssh            │ functional-558322 ssh findmnt -T /mount1                                                                           │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	│ mount          │ -p functional-558322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3858166900/001:/mount3 --alsologtostderr -v=1 │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	│ mount          │ -p functional-558322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3858166900/001:/mount1 --alsologtostderr -v=1 │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	│ ssh            │ functional-558322 ssh findmnt -T /mount1                                                                           │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ ssh            │ functional-558322 ssh findmnt -T /mount2                                                                           │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ ssh            │ functional-558322 ssh findmnt -T /mount3                                                                           │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ mount          │ -p functional-558322 --kill=true                                                                                   │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	│ start          │ -p functional-558322 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	│ start          │ -p functional-558322 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	│ start          │ -p functional-558322 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-558322 --alsologtostderr -v=1                                                     │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:35 UTC │
	│ update-context │ functional-558322 update-context --alsologtostderr -v=2                                                            │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:35 UTC │ 17 Oct 25 19:35 UTC │
	│ update-context │ functional-558322 update-context --alsologtostderr -v=2                                                            │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:35 UTC │ 17 Oct 25 19:35 UTC │
	│ update-context │ functional-558322 update-context --alsologtostderr -v=2                                                            │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:35 UTC │ 17 Oct 25 19:35 UTC │
	│ image          │ functional-558322 image ls --format short --alsologtostderr                                                        │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:35 UTC │ 17 Oct 25 19:35 UTC │
	│ image          │ functional-558322 image ls --format yaml --alsologtostderr                                                         │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:35 UTC │ 17 Oct 25 19:35 UTC │
	│ ssh            │ functional-558322 ssh pgrep buildkitd                                                                              │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:35 UTC │                     │
	│ image          │ functional-558322 image build -t localhost/my-image:functional-558322 testdata/build --alsologtostderr             │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:35 UTC │ 17 Oct 25 19:35 UTC │
	│ image          │ functional-558322 image ls --format json --alsologtostderr                                                         │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:35 UTC │ 17 Oct 25 19:35 UTC │
	│ image          │ functional-558322 image ls --format table --alsologtostderr                                                        │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:35 UTC │ 17 Oct 25 19:35 UTC │
	│ image          │ functional-558322 image ls                                                                                         │ functional-558322 │ jenkins │ v1.37.0 │ 17 Oct 25 19:35 UTC │ 17 Oct 25 19:35 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:34:57
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:34:57.638318  178852 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:34:57.638566  178852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:34:57.638574  178852 out.go:374] Setting ErrFile to fd 2...
	I1017 19:34:57.638579  178852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:34:57.638786  178852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:34:57.639249  178852 out.go:368] Setting JSON to false
	I1017 19:34:57.640200  178852 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4646,"bootTime":1760725052,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:34:57.640310  178852 start.go:141] virtualization: kvm guest
	I1017 19:34:57.642446  178852 out.go:179] * [functional-558322] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:34:57.643973  178852 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 19:34:57.643978  178852 notify.go:220] Checking for updates...
	I1017 19:34:57.645576  178852 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:34:57.647245  178852 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 19:34:57.648894  178852 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 19:34:57.650399  178852 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:34:57.651944  178852 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:34:57.654067  178852 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:34:57.654861  178852 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:34:57.681655  178852 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:34:57.681788  178852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:34:57.741459  178852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-17 19:34:57.731210487 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:34:57.741571  178852 docker.go:318] overlay module found
	I1017 19:34:57.743822  178852 out.go:179] * Using the docker driver based on existing profile
	I1017 19:34:57.745245  178852 start.go:305] selected driver: docker
	I1017 19:34:57.745263  178852 start.go:925] validating driver "docker" against &{Name:functional-558322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:34:57.745364  178852 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:34:57.745446  178852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:34:57.804693  178852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-17 19:34:57.794132415 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:34:57.805380  178852 cni.go:84] Creating CNI manager for ""
	I1017 19:34:57.805440  178852 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:34:57.805483  178852 start.go:349] cluster config:
	{Name:functional-558322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:34:57.807812  178852 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 17 19:35:04 functional-558322 crio[3591]: time="2025-10-17T19:35:04.923692445Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=2e9b5bad-e395-4122-b476-215e7eebec65 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:35:04 functional-558322 crio[3591]: time="2025-10-17T19:35:04.925461454Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=65579071-fc3b-44c6-a2cf-870e31773ea5 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:35:04 functional-558322 crio[3591]: time="2025-10-17T19:35:04.930218381Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5ppg7/kubernetes-dashboard" id=1639818c-d5ff-42fc-a799-9df44ee9ab72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:35:04 functional-558322 crio[3591]: time="2025-10-17T19:35:04.931079696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:35:04 functional-558322 crio[3591]: time="2025-10-17T19:35:04.935298403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:35:04 functional-558322 crio[3591]: time="2025-10-17T19:35:04.935470571Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/acc2a0e98787c1b77e059a1e5a4c26471f16ba75f5447a1eed8881cf33804d22/merged/etc/group: no such file or directory"
	Oct 17 19:35:04 functional-558322 crio[3591]: time="2025-10-17T19:35:04.935819103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:35:04 functional-558322 crio[3591]: time="2025-10-17T19:35:04.978441733Z" level=info msg="Created container 28abb0563b1be0462e44c9abd9656bd134a74ebabf5b1a69fecd87343ba53c9d: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5ppg7/kubernetes-dashboard" id=1639818c-d5ff-42fc-a799-9df44ee9ab72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:35:04 functional-558322 crio[3591]: time="2025-10-17T19:35:04.979279937Z" level=info msg="Starting container: 28abb0563b1be0462e44c9abd9656bd134a74ebabf5b1a69fecd87343ba53c9d" id=f4f4e5c9-7d9f-4899-85d0-8c5b6dc60bcb name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:35:04 functional-558322 crio[3591]: time="2025-10-17T19:35:04.981730732Z" level=info msg="Started container" PID=7736 containerID=28abb0563b1be0462e44c9abd9656bd134a74ebabf5b1a69fecd87343ba53c9d description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5ppg7/kubernetes-dashboard id=f4f4e5c9-7d9f-4899-85d0-8c5b6dc60bcb name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd538c6e16e13b4974080ec2af6f277dc3134426274b0f5f4fcfe27b94413a41
	Oct 17 19:35:05 functional-558322 crio[3591]: time="2025-10-17T19:35:05.707477951Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a98f405e-de6c-4563-b941-4707ee0d5e24 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:35:05 functional-558322 crio[3591]: time="2025-10-17T19:35:05.708160271Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=512b83fe-130b-47e6-bd4e-ceff336d54c5 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:35:30 functional-558322 crio[3591]: time="2025-10-17T19:35:30.707526482Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6cb177ff-aea1-43d8-b6bf-6cf7f9071e2b name=/runtime.v1.ImageService/PullImage
	Oct 17 19:35:32 functional-558322 crio[3591]: time="2025-10-17T19:35:32.707788165Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=10e30181-893e-49cf-b681-33bb3a642491 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:35:41 functional-558322 crio[3591]: time="2025-10-17T19:35:41.10412369Z" level=info msg="Stopping pod sandbox: 8f142337550cbb1d65a0dfc3bc864ffbea44ce15be9de5f904d8edf7a514bc1c" id=4b660141-2977-4ea6-9ee0-425ddea9ed38 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:35:41 functional-558322 crio[3591]: time="2025-10-17T19:35:41.104186605Z" level=info msg="Stopped pod sandbox (already stopped): 8f142337550cbb1d65a0dfc3bc864ffbea44ce15be9de5f904d8edf7a514bc1c" id=4b660141-2977-4ea6-9ee0-425ddea9ed38 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:35:41 functional-558322 crio[3591]: time="2025-10-17T19:35:41.104530809Z" level=info msg="Removing pod sandbox: 8f142337550cbb1d65a0dfc3bc864ffbea44ce15be9de5f904d8edf7a514bc1c" id=f52422a6-e163-4154-a9b0-b2cdae6c6812 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:35:41 functional-558322 crio[3591]: time="2025-10-17T19:35:41.107723426Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 19:35:41 functional-558322 crio[3591]: time="2025-10-17T19:35:41.107801161Z" level=info msg="Removed pod sandbox: 8f142337550cbb1d65a0dfc3bc864ffbea44ce15be9de5f904d8edf7a514bc1c" id=f52422a6-e163-4154-a9b0-b2cdae6c6812 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:36:12 functional-558322 crio[3591]: time="2025-10-17T19:36:12.707028787Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fcf16ef8-2034-46e9-ab84-91741098d29a name=/runtime.v1.ImageService/PullImage
	Oct 17 19:36:20 functional-558322 crio[3591]: time="2025-10-17T19:36:20.706827438Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3df71567-985d-4565-acef-85c9ac9c0d2b name=/runtime.v1.ImageService/PullImage
	Oct 17 19:37:45 functional-558322 crio[3591]: time="2025-10-17T19:37:45.7076223Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d325f426-953e-40a1-84d2-e4808e459814 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:37:51 functional-558322 crio[3591]: time="2025-10-17T19:37:51.707565567Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=be585019-c91f-4fe1-a8c7-a52d6b510d81 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:40:28 functional-558322 crio[3591]: time="2025-10-17T19:40:28.706887092Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=983a3580-301c-4a1b-9fce-1b9dd9258ce1 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:40:33 functional-558322 crio[3591]: time="2025-10-17T19:40:33.707735314Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cb7046a8-7c2c-435e-8f33-3710f5ec0031 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	28abb0563b1be       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   dd538c6e16e13       kubernetes-dashboard-855c9754f9-5ppg7        kubernetes-dashboard
	772da1830ff68       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   dcffed413cf99       dashboard-metrics-scraper-77bf4d6c4c-gbpw8   kubernetes-dashboard
	13ddd6bf7942c       docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115                  9 minutes ago       Running             myfrontend                  0                   1965cd122b6e4       sp-pod                                       default
	b5c7219a39668       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   669c1113df515       busybox-mount                                default
	ebc14f00eec5c       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  10 minutes ago      Running             nginx                       0                   ec4fa4029dbc9       nginx-svc                                    default
	eca73eb1f512c       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   5371be8099d13       mysql-5bb876957f-kbvc9                       default
	8381049e5f40d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   c26a0a3f9995a       kube-controller-manager-functional-558322    kube-system
	8108ee42134ae       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              2                   d0bebe271424b       kube-apiserver-functional-558322             kube-system
	2e42f8e373eb2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 11 minutes ago      Exited              kube-apiserver              1                   d0bebe271424b       kube-apiserver-functional-558322             kube-system
	0b0246a41f37a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     1                   c26a0a3f9995a       kube-controller-manager-functional-558322    kube-system
	83ff232887aaf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Running             kube-scheduler              1                   a3c0e9aa26145       kube-scheduler-functional-558322             kube-system
	2f49cb0dd1e9c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Running             etcd                        1                   96a38c7d0b426       etcd-functional-558322                       kube-system
	b8bc473851a54       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   2a0748913bcd8       coredns-66bc5c9577-8cxmd                     kube-system
	141665699d00b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         1                   17f35ba614e02       storage-provisioner                          kube-system
	d926e0824224c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   0e90ab4bd5255       kindnet-hm67v                                kube-system
	3b53b96a72ba6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   42ea4e6c49e90       kube-proxy-5kfhv                             kube-system
	adbbd826a3cca       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   2a0748913bcd8       coredns-66bc5c9577-8cxmd                     kube-system
	263c313fa8ebf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   17f35ba614e02       storage-provisioner                          kube-system
	07b9395201013       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   0e90ab4bd5255       kindnet-hm67v                                kube-system
	ba504ae1a5137       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 12 minutes ago      Exited              kube-proxy                  0                   42ea4e6c49e90       kube-proxy-5kfhv                             kube-system
	bdad34a8a5d33       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   96a38c7d0b426       etcd-functional-558322                       kube-system
	ed1a70c4dae68       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   a3c0e9aa26145       kube-scheduler-functional-558322             kube-system
	
	
	==> coredns [adbbd826a3ccab5532c158f633d112b530fa50af6ecbda38d2fd7fa4e9c35533] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45741 - 4747 "HINFO IN 4800723192862774835.4502851251895251221. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.085439579s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b8bc473851a546060212788f121907c389ae31889c839c0ff8b1d7a32c05012a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45010 - 25096 "HINFO IN 5059582837586449386.425431573033409085. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.104301958s
	
	
	==> describe nodes <==
	Name:               functional-558322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-558322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=functional-558322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_32_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-558322
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:44:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:43:31 +0000   Fri, 17 Oct 2025 19:32:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:43:31 +0000   Fri, 17 Oct 2025 19:32:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:43:31 +0000   Fri, 17 Oct 2025 19:32:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:43:31 +0000   Fri, 17 Oct 2025 19:32:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-558322
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                e3b9d2de-9e6e-4872-9dd0-42ee142614a6
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-lwxdl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-qg7l8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-kbvc9                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 coredns-66bc5c9577-8cxmd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-558322                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-hm67v                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-558322              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-558322     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5kfhv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-558322              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-gbpw8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5ppg7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-558322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-558322 status is now: NodeHasSufficientMemory
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-558322 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-558322 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-558322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-558322 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-558322 event: Registered Node functional-558322 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-558322 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-558322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-558322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-558322 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-558322 event: Registered Node functional-558322 in Controller
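
The conditions and events above are kubectl describe node output; the same Ready/MemoryPressure/DiskPressure rows can be read programmatically. A minimal client-go sketch, assuming a reachable kubeconfig at the default path (which this report does not capture):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config; assumes functional-558322 is the current context.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-558322", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Print the same rows shown in the Conditions table above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}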
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [2f49cb0dd1e9c6eb7bc03f0a843bfc8567211c7e3a25d6ed27175fddfc317ad0] <==
	{"level":"warn","ts":"2025-10-17T19:34:01.321778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.328291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.335890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.342650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.349387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.356350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.363919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.371006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.378420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.385786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.392142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.398933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.412095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.415826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.422879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.431268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:01.473127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40276","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:34:51.396428Z","caller":"traceutil/trace.go:172","msg":"trace[99353717] transaction","detail":"{read_only:false; response_revision:773; number_of_response:1; }","duration":"127.994154ms","start":"2025-10-17T19:34:51.268400Z","end":"2025-10-17T19:34:51.396394Z","steps":["trace[99353717] 'process raft request'  (duration: 60.579185ms)","trace[99353717] 'compare'  (duration: 67.225798ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:34:51.396521Z","caller":"traceutil/trace.go:172","msg":"trace[441672447] transaction","detail":"{read_only:false; response_revision:775; number_of_response:1; }","duration":"126.841304ms","start":"2025-10-17T19:34:51.269669Z","end":"2025-10-17T19:34:51.396510Z","steps":["trace[441672447] 'process raft request'  (duration: 126.770963ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:34:51.396681Z","caller":"traceutil/trace.go:172","msg":"trace[847205985] transaction","detail":"{read_only:false; response_revision:774; number_of_response:1; }","duration":"127.14268ms","start":"2025-10-17T19:34:51.269529Z","end":"2025-10-17T19:34:51.396672Z","steps":["trace[847205985] 'process raft request'  (duration: 126.859758ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:34:51.396448Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.586243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-10-17T19:34:51.396994Z","caller":"traceutil/trace.go:172","msg":"trace[327743180] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:772; }","duration":"115.112954ms","start":"2025-10-17T19:34:51.281826Z","end":"2025-10-17T19:34:51.396939Z","steps":["trace[327743180] 'agreement among raft nodes before linearized reading'  (duration: 47.109906ms)","trace[327743180] 'range keys from in-memory index tree'  (duration: 67.373176ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T19:44:01.007878Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1193}
	{"level":"info","ts":"2025-10-17T19:44:01.029174Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1193,"took":"20.871415ms","hash":493159978,"current-db-size-bytes":3624960,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1736704,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-10-17T19:44:01.029225Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":493159978,"revision":1193,"compact-revision":-1}
	
	
	==> etcd [bdad34a8a5d332846ffb1b3ceffe1bc0f9a6ac2567dad5b2adddf81eb32bc654] <==
	{"level":"warn","ts":"2025-10-17T19:32:35.091829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:32:35.099526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:32:35.106089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:32:35.113808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:32:35.124761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:32:35.131368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:32:35.140432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41244","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:33:20.085550Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-17T19:33:20.085645Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-558322","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-17T19:33:20.085733Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T19:33:27.087168Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T19:33:27.087364Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:33:27.087407Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-17T19:33:27.087440Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T19:33:27.087507Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-17T19:33:27.087511Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-10-17T19:33:27.087521Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:33:27.087527Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-17T19:33:27.087583Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T19:33:27.087600Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T19:33:27.087609Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:33:27.089915Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-17T19:33:27.089990Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:33:27.090067Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-17T19:33:27.090077Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-558322","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 19:44:51 up  1:27,  0 user,  load average: 0.12, 0.26, 0.76
	Linux functional-558322 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [07b939520101312216f07756fe171129f0000d1282f8806fa5844d82b6ec1660] <==
	I1017 19:32:44.008690       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:32:44.014559       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1017 19:32:44.014730       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:32:44.014767       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:32:44.014784       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:32:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:32:44.314441       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:32:44.314596       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:32:44.314615       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:32:44.314834       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:32:44.614883       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:32:44.614919       1 metrics.go:72] Registering metrics
	I1017 19:32:44.614984       1 controller.go:711] "Syncing nftables rules"
	I1017 19:32:54.224538       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:32:54.224636       1 main.go:301] handling current node
	I1017 19:33:04.226839       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:33:04.226896       1 main.go:301] handling current node
	I1017 19:33:14.227820       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:33:14.227861       1 main.go:301] handling current node
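
Both kindnet containers emit the same pair of lines every 10 seconds, which suggests a plain ticker-driven reconcile loop over the node list. A reduced, hypothetical sketch of that cadence (the real kindnetd reconciles routes and nftables rules per node; this stand-in only prints):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Hypothetical stand-in for the watched node list; the single entry
		// matches the one node visible in the log above.
		nodeIPs := map[string]struct{}{"192.168.49.2": {}}
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			fmt.Printf("Handling node with IPs: %v\n", nodeIPs)
			fmt.Println("handling current node")
		}
	}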
	
	
	==> kindnet [d926e0824224c7db48d32efa933eaf3c499f967b58dafb8016a3622f47d9f221] <==
	I1017 19:42:50.837784       1 main.go:301] handling current node
	I1017 19:43:00.844868       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:43:00.844903       1 main.go:301] handling current node
	I1017 19:43:10.838287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:43:10.838334       1 main.go:301] handling current node
	I1017 19:43:20.839431       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:43:20.839471       1 main.go:301] handling current node
	I1017 19:43:30.838638       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:43:30.838679       1 main.go:301] handling current node
	I1017 19:43:40.839872       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:43:40.839913       1 main.go:301] handling current node
	I1017 19:43:50.840931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:43:50.840974       1 main.go:301] handling current node
	I1017 19:44:00.837886       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:44:00.837926       1 main.go:301] handling current node
	I1017 19:44:10.837667       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:44:10.837714       1 main.go:301] handling current node
	I1017 19:44:20.838846       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:44:20.838891       1 main.go:301] handling current node
	I1017 19:44:30.844880       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:44:30.844915       1 main.go:301] handling current node
	I1017 19:44:40.840174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:44:40.840207       1 main.go:301] handling current node
	I1017 19:44:50.838384       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:44:50.838437       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2e42f8e373eb202d0eeb349d86493af7b9986f6159c0edbef48160f01af8ec3c] <==
	I1017 19:33:40.866718       1 options.go:263] external host was not specified, using 192.168.49.2
	I1017 19:33:40.872228       1 server.go:150] Version: v1.34.1
	I1017 19:33:40.872274       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1017 19:33:40.872724       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
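
This instance exited at startup because something, most likely the previous apiserver process, still held port 8441 during the restart. The bind check it failed can be reproduced in isolation; a minimal sketch using the address and port from the error above:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Attempt the same listen the apiserver attempts on startup.
		ln, err := net.Listen("tcp", "0.0.0.0:8441")
		if err != nil {
			// Matches the "bind: address already in use" failure logged above.
			fmt.Println("port 8441 is taken:", err)
			return
		}
		ln.Close()
		fmt.Println("port 8441 is free")
	}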
	
	
	==> kube-apiserver [8108ee42134ae77e97c43252ded9b24df2465b150b90460d00e6149def89d866] <==
	I1017 19:34:01.976483       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:34:01.976863       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:34:02.855337       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1017 19:34:03.161844       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1017 19:34:03.163124       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:34:03.168200       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:34:04.807347       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:34:06.305578       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:34:28.863939       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.80.37"}
	I1017 19:34:34.124107       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.180.80"}
	I1017 19:34:34.168109       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:34:35.701928       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.74.132"}
	E1017 19:34:48.262265       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54298: use of closed network connection
	E1017 19:34:49.757955       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54314: use of closed network connection
	I1017 19:34:50.031228       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.0.232"}
	E1017 19:34:51.158318       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54374: use of closed network connection
	I1017 19:34:51.434261       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.56.71"}
	E1017 19:34:53.326179       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54414: use of closed network connection
	I1017 19:34:58.660959       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 19:34:58.715489       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:34:58.726935       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:34:58.783582       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.122.177"}
	I1017 19:34:58.797187       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.71.55"}
	E1017 19:35:02.209330       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:34212: use of closed network connection
	I1017 19:44:01.877713       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [0b0246a41f37a77419848c6cc3d9faaade9d867eedf6ff2f4dd43ef4a60919a0] <==
	I1017 19:33:41.679716       1 shared_informer.go:349] "Waiting for caches to sync" controller="HPA"
	I1017 19:33:41.682049       1 controllermanager.go:781] "Started controller" controller="statefulset-controller"
	I1017 19:33:41.682259       1 stateful_set.go:169] "Starting stateful set controller" logger="statefulset-controller"
	I1017 19:33:41.682280       1 shared_informer.go:349] "Waiting for caches to sync" controller="stateful set"
	I1017 19:33:41.684858       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1017 19:33:41.684879       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-serving"
	I1017 19:33:41.684902       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1017 19:33:41.685340       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1017 19:33:41.685357       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
	I1017 19:33:41.685382       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1017 19:33:41.685773       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1017 19:33:41.685793       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 19:33:41.685815       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1017 19:33:41.686106       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1017 19:33:41.686255       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1017 19:33:41.686271       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
	I1017 19:33:41.686283       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1017 19:33:41.688531       1 controllermanager.go:781] "Started controller" controller="persistentvolume-binder-controller"
	I1017 19:33:41.688639       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1017 19:33:41.688651       1 shared_informer.go:349] "Waiting for caches to sync" controller="persistent volume"
	I1017 19:33:41.693878       1 shared_informer.go:356] "Caches are synced" controller="tokens"
	I1017 19:33:41.696614       1 controllermanager.go:781] "Started controller" controller="volumeattributesclass-protection-controller"
	I1017 19:33:41.696679       1 vac_protection_controller.go:206] "Starting VAC protection controller" logger="volumeattributesclass-protection-controller"
	I1017 19:33:41.696688       1 shared_informer.go:349] "Waiting for caches to sync" controller="VAC protection"
	F1017 19:33:41.744316       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/endpointslice-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [8381049e5f40d470a5da632c1722d781be3eeb28f16691cf4d02dc72b61e4dd7] <==
	I1017 19:34:06.200196       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 19:34:06.200261       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 19:34:06.200328       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 19:34:06.200333       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:34:06.200377       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:34:06.200266       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-558322"
	I1017 19:34:06.200440       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 19:34:06.200467       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 19:34:06.200412       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 19:34:06.200533       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:34:06.200574       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 19:34:06.201164       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 19:34:06.201801       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 19:34:06.205581       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:34:06.206796       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:34:06.219132       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 19:34:06.220410       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:34:06.221554       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:34:06.223723       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	E1017 19:34:58.719964       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1017 19:34:58.724334       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1017 19:34:58.726515       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1017 19:34:58.729579       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1017 19:34:58.731402       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1017 19:34:58.735566       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [3b53b96a72ba624d76a6e7796a439af85ff14b099f5f967a198cc6a6890eed8e] <==
	I1017 19:33:20.673249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:33:20.673304       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:33:20.679782       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:33:20.680120       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:33:20.680135       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:33:20.681377       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:33:20.681403       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:33:20.681423       1 config.go:200] "Starting service config controller"
	I1017 19:33:20.681429       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:33:20.681463       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:33:20.681491       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:33:20.681497       1 config.go:309] "Starting node config controller"
	I1017 19:33:20.681507       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:33:20.681514       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:33:20.782510       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:33:20.782557       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:33:20.782557       1 shared_informer.go:356] "Caches are synced" controller="service config"
	E1017 19:33:38.985568       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:33:38.985597       1 reflector.go:205] "Failed to watch" err="nodes \"functional-558322\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:33:38.985626       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1017 19:33:38.985710       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1017 19:33:41.736091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=518\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1017 19:33:46.824397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=518\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1017 19:33:57.046468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=518\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1017 19:34:01.882665       1 reflector.go:205] "Failed to watch" err="nodes \"functional-558322\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
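
The burst of "connection refused" and RBAC "not found" errors above spans the apiserver restart window (roughly 19:33:38 to 19:34:01); client-go reflectors simply retry with backoff until the endpoint answers, and the final entry at 19:34:01 already reaches the restarted apiserver, failing only on not-yet-reconciled RBAC. A sketch of the same wait-until-up behaviour against the endpoint from the log (the two-minute deadline and fixed delay are arbitrary choices, not values from the report):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const addr = "192.168.49.2:8441" // apiserver endpoint from the errors above
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("apiserver is accepting connections again")
				return
			}
			time.Sleep(2 * time.Second) // reflectors use exponential backoff; a fixed delay suffices here
		}
		fmt.Println("gave up waiting for", addr)
	}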
	
	
	==> kube-proxy [ba504ae1a51372282e1e718da7308bdd4a7b0c02ba974a2d9e3b17a631455788] <==
	I1017 19:32:43.831382       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:32:43.892915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:32:43.994853       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:32:43.994924       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:32:43.995032       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:32:44.031111       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:32:44.031264       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:32:44.040572       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:32:44.041040       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:32:44.041125       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:32:44.044186       1 config.go:200] "Starting service config controller"
	I1017 19:32:44.044215       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:32:44.044363       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:32:44.044402       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:32:44.044897       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:32:44.044934       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:32:44.045114       1 config.go:309] "Starting node config controller"
	I1017 19:32:44.045128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:32:44.045136       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:32:44.144478       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:32:44.144480       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:32:44.145525       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
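
The "Waiting for caches to sync" / "Caches are synced" pairs above come from client-go's shared-informer machinery (shared_informer.go) reporting startup progress for each config controller. The same pattern in miniature, again assuming a reachable kubeconfig at the default path:

	package main

	import (
		"fmt"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		stop := make(chan struct{})
		defer close(stop)

		factory := informers.NewSharedInformerFactory(cs, 0)
		factory.Core().V1().Services().Informer() // register before Start, like the service config controller
		factory.Start(stop)
		// Block until the watch cache is primed: the "Caches are synced" moment above.
		for typ, ok := range factory.WaitForCacheSync(stop) {
			fmt.Printf("cache synced for %v: %v\n", typ, ok)
		}
	}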
	
	
	==> kube-scheduler [83ff232887aafb9bb6e9fc7bac5820112beb2ae6888372de7ce7064271591d98] <==
	E1017 19:33:50.799394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:33:50.970846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:33:50.997189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:33:51.427796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:33:51.588533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:33:51.655317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:33:56.880200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:33:57.387756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:33:57.815717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:33:58.516404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:33:58.605534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:33:58.691532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:33:58.870659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:33:58.948863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:33:59.008599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:33:59.135012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:33:59.410287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:33:59.540836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:33:59.791702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:33:59.875328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:33:59.983936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:34:00.104207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:34:00.695965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:34:01.866711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1017 19:34:15.375469       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ed1a70c4dae682078a7a1776dcec173248061f70451b939f4c9c97f773f2ee1e] <==
	E1017 19:32:36.030208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:32:36.030254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:32:36.030374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:32:36.030445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:32:36.030586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:32:36.030822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:32:36.031610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:32:36.031696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:32:36.031828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:32:36.031919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:32:36.031984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:32:36.031990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:32:36.032109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:32:36.874350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:32:36.913699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:32:36.944203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:32:36.948314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:32:36.949238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1017 19:32:38.627993       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:33:37.818407       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1017 19:33:37.818417       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:33:37.818516       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1017 19:33:37.818532       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1017 19:33:37.818554       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1017 19:33:37.818576       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 17 19:42:07 functional-558322 kubelet[4287]: E1017 19:42:07.707026    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:42:17 functional-558322 kubelet[4287]: E1017 19:42:17.706593    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qg7l8" podUID="b595efd3-0d05-47f6-ab7a-c8779e7b27a7"
	Oct 17 19:42:19 functional-558322 kubelet[4287]: E1017 19:42:19.707834    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:42:30 functional-558322 kubelet[4287]: E1017 19:42:30.706985    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:42:30 functional-558322 kubelet[4287]: E1017 19:42:30.707076    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qg7l8" podUID="b595efd3-0d05-47f6-ab7a-c8779e7b27a7"
	Oct 17 19:42:42 functional-558322 kubelet[4287]: E1017 19:42:42.707189    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:42:45 functional-558322 kubelet[4287]: E1017 19:42:45.706620    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qg7l8" podUID="b595efd3-0d05-47f6-ab7a-c8779e7b27a7"
	Oct 17 19:42:57 functional-558322 kubelet[4287]: E1017 19:42:57.707274    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:43:00 functional-558322 kubelet[4287]: E1017 19:43:00.707117    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qg7l8" podUID="b595efd3-0d05-47f6-ab7a-c8779e7b27a7"
	Oct 17 19:43:11 functional-558322 kubelet[4287]: E1017 19:43:11.706481    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qg7l8" podUID="b595efd3-0d05-47f6-ab7a-c8779e7b27a7"
	Oct 17 19:43:11 functional-558322 kubelet[4287]: E1017 19:43:11.706556    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:43:25 functional-558322 kubelet[4287]: E1017 19:43:25.707541    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:43:25 functional-558322 kubelet[4287]: E1017 19:43:25.707587    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qg7l8" podUID="b595efd3-0d05-47f6-ab7a-c8779e7b27a7"
	Oct 17 19:43:39 functional-558322 kubelet[4287]: E1017 19:43:39.707902    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qg7l8" podUID="b595efd3-0d05-47f6-ab7a-c8779e7b27a7"
	Oct 17 19:43:39 functional-558322 kubelet[4287]: E1017 19:43:39.707961    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:43:50 functional-558322 kubelet[4287]: E1017 19:43:50.707059    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:43:54 functional-558322 kubelet[4287]: E1017 19:43:54.707233    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qg7l8" podUID="b595efd3-0d05-47f6-ab7a-c8779e7b27a7"
	Oct 17 19:44:05 functional-558322 kubelet[4287]: E1017 19:44:05.707494    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:44:05 functional-558322 kubelet[4287]: E1017 19:44:05.707573    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qg7l8" podUID="b595efd3-0d05-47f6-ab7a-c8779e7b27a7"
	Oct 17 19:44:18 functional-558322 kubelet[4287]: E1017 19:44:18.706912    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:44:20 functional-558322 kubelet[4287]: E1017 19:44:20.706317    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qg7l8" podUID="b595efd3-0d05-47f6-ab7a-c8779e7b27a7"
	Oct 17 19:44:30 functional-558322 kubelet[4287]: E1017 19:44:30.706913    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:44:33 functional-558322 kubelet[4287]: E1017 19:44:33.706724    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qg7l8" podUID="b595efd3-0d05-47f6-ab7a-c8779e7b27a7"
	Oct 17 19:44:42 functional-558322 kubelet[4287]: E1017 19:44:42.706801    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-lwxdl" podUID="d3b88741-5fff-4431-8847-8cc912e170a2"
	Oct 17 19:44:47 functional-558322 kubelet[4287]: E1017 19:44:47.707515    4287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qg7l8" podUID="b595efd3-0d05-47f6-ab7a-c8779e7b27a7"
	
	
	==> kubernetes-dashboard [28abb0563b1be0462e44c9abd9656bd134a74ebabf5b1a69fecd87343ba53c9d] <==
	2025/10/17 19:35:05 Using namespace: kubernetes-dashboard
	2025/10/17 19:35:05 Using in-cluster config to connect to apiserver
	2025/10/17 19:35:05 Using secret token for csrf signing
	2025/10/17 19:35:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 19:35:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 19:35:05 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 19:35:05 Generating JWE encryption key
	2025/10/17 19:35:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 19:35:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 19:35:05 Initializing JWE encryption key from synchronized object
	2025/10/17 19:35:05 Creating in-cluster Sidecar client
	2025/10/17 19:35:05 Successful request to sidecar
	2025/10/17 19:35:05 Serving insecurely on HTTP port: 9090
	2025/10/17 19:35:05 Starting overwatch
	
	
	==> storage-provisioner [141665699d00b50a503af42ff3cce0e4f04351a0fb060b936c0ca92f7d38d8bf] <==
	W1017 19:44:27.759901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:29.763174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:29.768402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:31.771676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:31.776083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:33.778980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:33.785007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:35.788250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:35.792813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:37.796297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:37.800669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:39.804561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:39.810250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:41.813434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:41.817639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:43.821274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:43.826863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:45.830290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:45.834239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:47.837170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:47.842868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:49.846674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:49.851585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:51.855653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:44:51.859987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [263c313fa8ebf0e02974084be95f620ec9e29e8f871b9f12406a340d244cfabd] <==
	I1017 19:32:55.067651       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-558322_09b14cbf-9635-4b2f-864a-de3799df49ec!
	W1017 19:32:56.975848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:32:56.980369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:32:58.984180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:32:58.988958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:00.992820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:00.998814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:03.002849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:03.007089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:05.010467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:05.014959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:07.018905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:07.025076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:09.028505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:09.033903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:11.037272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:11.042365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:13.045483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:13.051368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:15.055122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:15.059409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:17.062959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:17.068734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:19.072247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:33:19.076978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
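Triage note: the kube-scheduler "connection refused" and "forbidden" errors in the logs above are startup/restart noise (the apiserver was briefly down and RBAC caches had not yet synced), and the storage-provisioner lines are deprecation warnings only: it appears to still poll core/v1 Endpoints, which client-go flags as deprecated since v1.33 in favor of discovery.k8s.io/v1 EndpointSlice. Neither is the failure here. For anyone chasing the warning, a minimal sketch of the old resource versus its replacement (commands assumed for illustration, not from this run):

    # deprecated core/v1 resource the provisioner keeps polling:
    kubectl --context functional-558322 -n kube-system get endpoints
    # discovery.k8s.io/v1 replacement the warning points at:
    kubectl --context functional-558322 -n kube-system get endpointslices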
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-558322 -n functional-558322
helpers_test.go:269: (dbg) Run:  kubectl --context functional-558322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-lwxdl hello-node-connect-7d85dfc575-qg7l8
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-558322 describe pod busybox-mount hello-node-75c85bcc94-lwxdl hello-node-connect-7d85dfc575-qg7l8
helpers_test.go:290: (dbg) kubectl --context functional-558322 describe pod busybox-mount hello-node-75c85bcc94-lwxdl hello-node-connect-7d85dfc575-qg7l8:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-558322/192.168.49.2
	Start Time:       Fri, 17 Oct 2025 19:34:44 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  cri-o://b5c7219a39668ab0242630df3de6e71c6d4fd5f82ccd2d419a9fea6f49ed9309
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 17 Oct 2025 19:34:48 +0000
	      Finished:     Fri, 17 Oct 2025 19:34:48 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwsmc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-lwsmc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-558322
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.944s (4.147s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-lwxdl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-558322/192.168.49.2
	Start Time:       Fri, 17 Oct 2025 19:34:51 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-btjr8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-btjr8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-lwxdl to functional-558322
	  Normal   Pulling    7m1s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m1s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m1s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m56s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m56s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-qg7l8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-558322/192.168.49.2
	Start Time:       Fri, 17 Oct 2025 19:34:49 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j6xcg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j6xcg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qg7l8 to functional-558322
	  Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     5m2s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.08s)
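Triage note: the actual failure is the repeated pull error `short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list`. CRI-O resolves unqualified image names through containers-registries.conf(5); with `short-name-mode = "enforcing"`, a short name that could match more than one unqualified-search registry is rejected as ambiguous instead of being tried in order. Two sketches of the usual remedies, assuming docker.io is the intended registry (the drop-in file name below is illustrative, not from this run):

    # 1) Deploy with a fully-qualified reference so short-name resolution never runs:
    kubectl --context functional-558322 create deployment hello-node \
      --image=docker.io/kicbase/echo-server:latest

    # 2) Or, inside the node, pin an alias for the short name and restart CRI-O:
    printf '%s\n' '[aliases]' '"kicbase/echo-server" = "docker.io/kicbase/echo-server"' \
      | sudo tee /etc/containers/registries.conf.d/99-echo-server.conf
    sudo systemctl restart crio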

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image load --daemon kicbase/echo-server:functional-558322 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-558322" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)
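Triage note: the assertion greps `image ls` output for the bare tag `kicbase/echo-server:functional-558322`, while CRI-O stores and lists images under fully-qualified references, so a load that succeeded at the runtime level can still be reported as missing. A purely diagnostic sketch for checking what name the runtime actually recorded:

    out/minikube-linux-amd64 -p functional-558322 image ls --format table
    out/minikube-linux-amd64 -p functional-558322 ssh -- sudo crictl images | grep echo-server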

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image load --daemon kicbase/echo-server:functional-558322 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-558322" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-558322
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image load --daemon kicbase/echo-server:functional-558322 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-558322 image load --daemon kicbase/echo-server:functional-558322 --alsologtostderr: (1.159703303s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-558322 image ls: (2.241898893s)
functional_test.go:461: expected "kicbase/echo-server:functional-558322" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image save kicbase/echo-server:functional-558322 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1017 19:34:41.747062  174709 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:34:41.747270  174709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:34:41.747284  174709 out.go:374] Setting ErrFile to fd 2...
	I1017 19:34:41.747289  174709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:34:41.747475  174709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:34:41.748116  174709 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:34:41.748210  174709 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:34:41.748600  174709 cli_runner.go:164] Run: docker container inspect functional-558322 --format={{.State.Status}}
	I1017 19:34:41.767716  174709 ssh_runner.go:195] Run: systemctl --version
	I1017 19:34:41.767834  174709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558322
	I1017 19:34:41.786823  174709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32899 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/functional-558322/id_rsa Username:docker}
	I1017 19:34:41.883766  174709 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1017 19:34:41.883832  174709 cache_images.go:254] Failed to load cached images for "functional-558322": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1017 19:34:41.883861  174709 cache_images.go:266] failed pushing to: functional-558322

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
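Triage note: this failure cascades from ImageSaveToFile above: `image save` exited without writing /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar (the tagged image was never present in the cluster), so `image load` then stats a path that does not exist, as the stderr trace shows. Before blaming the load step in a local run, it is worth confirming the artifact first:

    ls -l /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
    # a valid docker/OCI archive lists a manifest plus layer blobs:
    tar -tf /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar | head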

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-558322
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image save --daemon kicbase/echo-server:functional-558322 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-558322
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-558322: exit status 1 (19.416373ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-558322

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-558322

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-558322 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-558322 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-lwxdl" [d3b88741-5fff-4431-8847-8cc912e170a2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-558322 -n functional-558322
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-17 19:44:51.777283032 +0000 UTC m=+1170.330240130
functional_test.go:1460: (dbg) Run:  kubectl --context functional-558322 describe po hello-node-75c85bcc94-lwxdl -n default
functional_test.go:1460: (dbg) kubectl --context functional-558322 describe po hello-node-75c85bcc94-lwxdl -n default:
Name:             hello-node-75c85bcc94-lwxdl
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-558322/192.168.49.2
Start Time:       Fri, 17 Oct 2025 19:34:51 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-btjr8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-btjr8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-lwxdl to functional-558322
  Normal   Pulling    7m (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m (x5 over 10m)      kubelet            Error: ErrImagePull
  Normal   BackOff    4m55s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m55s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-558322 logs hello-node-75c85bcc94-lwxdl -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-558322 logs hello-node-75c85bcc94-lwxdl -n default: exit status 1 (71.463338ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-lwxdl" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-558322 logs hello-node-75c85bcc94-lwxdl -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.76s)
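Triage note: same short-name root cause as ServiceCmdConnect; the deployment is created with the unqualified `kicbase/echo-server`, so under enforcing mode every pull is rejected and the pod never leaves ImagePullBackOff. For a manual run, a one-line repair sketch (not something the test itself does):

    kubectl --context functional-558322 set image deployment/hello-node \
      echo-server=docker.io/kicbase/echo-server:latest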

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 service --namespace=default --https --url hello-node: exit status 115 (538.959292ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31515
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-558322 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 service hello-node --url --format={{.IP}}: exit status 115 (541.250431ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-558322 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 service hello-node --url: exit status 115 (545.63911ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31515
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-558322 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31515
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.55s)
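Triage note: the three ServiceCmd failures above (HTTPS, Format, URL) are downstream of the same stuck hello-node deployment; the NodePort URL itself resolves (see the stdout lines), but minikube exits with SVC_UNREACHABLE because the service has no ready backend. A quick confirmation sketch, using the labels shown in the pod descriptions earlier:

    kubectl --context functional-558322 get pods -l app=hello-node
    kubectl --context functional-558322 get endpointslices -l kubernetes.io/service-name=hello-node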

                                                
                                    
TestJSONOutput/pause/Command (2.34s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-415849 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-415849 --output=json --user=testUser: exit status 80 (2.340162357s)

-- stdout --
	{"specversion":"1.0","id":"8ab35018-2344-42b7-a6f6-1bca54e74c7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-415849 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"41993ff6-68e9-4546-8c6f-60afe51b5776","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-17T19:53:49Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"208b1347-2545-4de5-83bb-c08b67c8d6b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-415849 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.34s)
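
This failure, the unpause failure that follows, and the TestPause failures later in this report all reduce to the same underlying error: minikube's pause path runs `sudo runc list -f json` on the node, which fails with `open /run/runc: no such file or directory`. The CRI-O configuration dumped at the end of this section sets default_runtime = "crun" with runtime_root = "/run/crun", so container state lives under /run/crun and a /run/runc directory is never created. A quick sketch of the check, assuming the json-output-415849 profile is still up (the same commands apply to pause-538803):

	# Reproduces the error minikube hits: /run/runc does not exist on this node
	minikube ssh -p json-output-415849 -- sudo ls /run/runc
	# Per the CRI-O config, crun keeps its state here instead
	minikube ssh -p json-output-415849 -- sudo ls /run/crun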

TestJSONOutput/unpause/Command (1.86s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-415849 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-415849 --output=json --user=testUser: exit status 80 (1.861980017s)

-- stdout --
	{"specversion":"1.0","id":"4e74191c-192c-4624-bdf0-39d566ff50ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-415849 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"58621633-66bd-4cbb-8e13-a3dac70c1bcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-17T19:53:51Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"500244ee-1d4d-48b7-ba64-b373f5daca6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-415849 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.86s)

TestPause/serial/Pause (7.27s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-538803 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-538803 --alsologtostderr -v=5: exit status 80 (1.974915726s)

-- stdout --
	* Pausing node pause-538803 ... 
	
	

-- /stdout --
** stderr ** 
	I1017 20:08:09.268064  326285 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:08:09.268302  326285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:08:09.268310  326285 out.go:374] Setting ErrFile to fd 2...
	I1017 20:08:09.268321  326285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:08:09.268553  326285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:08:09.268837  326285 out.go:368] Setting JSON to false
	I1017 20:08:09.268876  326285 mustload.go:65] Loading cluster: pause-538803
	I1017 20:08:09.269325  326285 config.go:182] Loaded profile config "pause-538803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:08:09.269933  326285 cli_runner.go:164] Run: docker container inspect pause-538803 --format={{.State.Status}}
	I1017 20:08:09.289280  326285 host.go:66] Checking if "pause-538803" exists ...
	I1017 20:08:09.289589  326285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:08:09.362287  326285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-17 20:08:09.351384117 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:08:09.363116  326285 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-538803 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:08:09.365144  326285 out.go:179] * Pausing node pause-538803 ... 
	I1017 20:08:09.366193  326285 host.go:66] Checking if "pause-538803" exists ...
	I1017 20:08:09.366496  326285 ssh_runner.go:195] Run: systemctl --version
	I1017 20:08:09.366548  326285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-538803
	I1017 20:08:09.390189  326285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/pause-538803/id_rsa Username:docker}
	I1017 20:08:09.503467  326285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:08:09.520673  326285 pause.go:52] kubelet running: true
	I1017 20:08:09.520857  326285 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:08:09.707309  326285 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:08:09.707415  326285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:08:09.801390  326285 cri.go:89] found id: "3ed90fee941cfc6b47aaaa62ee5f1e5de18e4b58ea3f2b426f3709c9d678036f"
	I1017 20:08:09.801415  326285 cri.go:89] found id: "239fbd7452fd92cc2603d099847edd02ad7e0c132f82de50e4e269a5e6b4f482"
	I1017 20:08:09.801420  326285 cri.go:89] found id: "95259cba17427cdbbfae053cd2b39d4b3c654df60e1e71078c392e0c7d14a921"
	I1017 20:08:09.801424  326285 cri.go:89] found id: "19631948f3c471e156cc03e91057d9598ae3a0f997e6aecab1d975d7d205c239"
	I1017 20:08:09.801428  326285 cri.go:89] found id: "6fcac3cb702e33b133c829840f399bc5c99ff4647b4574e33e26427fa5b3dbae"
	I1017 20:08:09.801433  326285 cri.go:89] found id: "7c25bf83c1b5316591d37c2782089bd05a37091827c8117997dbfc24c7de6219"
	I1017 20:08:09.801437  326285 cri.go:89] found id: "602df903b6e76bb500b5c63eff1e5965496e6a42559bca919255952ce7b32f06"
	I1017 20:08:09.801441  326285 cri.go:89] found id: ""
	I1017 20:08:09.801522  326285 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:08:09.818286  326285 retry.go:31] will retry after 350.189424ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:08:09Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:08:10.169642  326285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:08:10.185010  326285 pause.go:52] kubelet running: false
	I1017 20:08:10.185099  326285 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:08:10.320483  326285 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:08:10.320602  326285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:08:10.402710  326285 cri.go:89] found id: "3ed90fee941cfc6b47aaaa62ee5f1e5de18e4b58ea3f2b426f3709c9d678036f"
	I1017 20:08:10.402747  326285 cri.go:89] found id: "239fbd7452fd92cc2603d099847edd02ad7e0c132f82de50e4e269a5e6b4f482"
	I1017 20:08:10.402754  326285 cri.go:89] found id: "95259cba17427cdbbfae053cd2b39d4b3c654df60e1e71078c392e0c7d14a921"
	I1017 20:08:10.402760  326285 cri.go:89] found id: "19631948f3c471e156cc03e91057d9598ae3a0f997e6aecab1d975d7d205c239"
	I1017 20:08:10.402765  326285 cri.go:89] found id: "6fcac3cb702e33b133c829840f399bc5c99ff4647b4574e33e26427fa5b3dbae"
	I1017 20:08:10.402769  326285 cri.go:89] found id: "7c25bf83c1b5316591d37c2782089bd05a37091827c8117997dbfc24c7de6219"
	I1017 20:08:10.402780  326285 cri.go:89] found id: "602df903b6e76bb500b5c63eff1e5965496e6a42559bca919255952ce7b32f06"
	I1017 20:08:10.402784  326285 cri.go:89] found id: ""
	I1017 20:08:10.402831  326285 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:08:10.414872  326285 retry.go:31] will retry after 253.774399ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:08:10Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:08:10.669554  326285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:08:10.683968  326285 pause.go:52] kubelet running: false
	I1017 20:08:10.684038  326285 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:08:10.816541  326285 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:08:10.816610  326285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:08:10.893500  326285 cri.go:89] found id: "3ed90fee941cfc6b47aaaa62ee5f1e5de18e4b58ea3f2b426f3709c9d678036f"
	I1017 20:08:10.893527  326285 cri.go:89] found id: "239fbd7452fd92cc2603d099847edd02ad7e0c132f82de50e4e269a5e6b4f482"
	I1017 20:08:10.893534  326285 cri.go:89] found id: "95259cba17427cdbbfae053cd2b39d4b3c654df60e1e71078c392e0c7d14a921"
	I1017 20:08:10.893539  326285 cri.go:89] found id: "19631948f3c471e156cc03e91057d9598ae3a0f997e6aecab1d975d7d205c239"
	I1017 20:08:10.893544  326285 cri.go:89] found id: "6fcac3cb702e33b133c829840f399bc5c99ff4647b4574e33e26427fa5b3dbae"
	I1017 20:08:10.893549  326285 cri.go:89] found id: "7c25bf83c1b5316591d37c2782089bd05a37091827c8117997dbfc24c7de6219"
	I1017 20:08:10.893553  326285 cri.go:89] found id: "602df903b6e76bb500b5c63eff1e5965496e6a42559bca919255952ce7b32f06"
	I1017 20:08:10.893557  326285 cri.go:89] found id: ""
	I1017 20:08:10.893600  326285 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:08:10.998442  326285 out.go:203] 
	W1017 20:08:11.062852  326285 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:08:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:08:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:08:11.062877  326285 out.go:285] * 
	* 
	W1017 20:08:11.067079  326285 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:08:11.145759  326285 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-538803 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-538803
helpers_test.go:243: (dbg) docker inspect pause-538803:

-- stdout --
	[
	    {
	        "Id": "fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b",
	        "Created": "2025-10-17T20:07:22.303547347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:07:22.816384144Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b/hosts",
	        "LogPath": "/var/lib/docker/containers/fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b/fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b-json.log",
	        "Name": "/pause-538803",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-538803:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-538803",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b",
	                "LowerDir": "/var/lib/docker/overlay2/488c7b258b4aab4a7ae003bdb5089c379981bc783d502690d206ac10d8ba5c5c-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/488c7b258b4aab4a7ae003bdb5089c379981bc783d502690d206ac10d8ba5c5c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/488c7b258b4aab4a7ae003bdb5089c379981bc783d502690d206ac10d8ba5c5c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/488c7b258b4aab4a7ae003bdb5089c379981bc783d502690d206ac10d8ba5c5c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-538803",
	                "Source": "/var/lib/docker/volumes/pause-538803/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-538803",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-538803",
	                "name.minikube.sigs.k8s.io": "pause-538803",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "55bf0d932333482d1bcc1f879ddc9e5f7657216ee8bc9175ccaac2a85d50af5c",
	            "SandboxKey": "/var/run/docker/netns/55bf0d932333",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-538803": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:ec:86:05:3b:76",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fbf0617e713532420ddbc60c83f1910107887c0c6aff3557126c73c2a3421d76",
	                    "EndpointID": "c00a8a32cb550b4ea3d83458603dba859d17a032d1c2b276cc7196d25bf1e4f2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-538803",
	                        "fe0cc8fb1393"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-538803 -n pause-538803
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-538803 -n pause-538803: exit status 2 (354.684615ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-538803 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-538803 logs -n 25: (2.02576221s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-910370 --cancel-scheduled                                                                           │ scheduled-stop-910370       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ stop    │ -p scheduled-stop-910370 --schedule 15s                                                                               │ scheduled-stop-910370       │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ stop    │ -p scheduled-stop-910370 --schedule 15s                                                                               │ scheduled-stop-910370       │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ stop    │ -p scheduled-stop-910370 --schedule 15s                                                                               │ scheduled-stop-910370       │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:06 UTC │
	│ delete  │ -p scheduled-stop-910370                                                                                              │ scheduled-stop-910370       │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:06 UTC │
	│ start   │ -p insufficient-storage-621455 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-621455 │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ delete  │ -p insufficient-storage-621455                                                                                        │ insufficient-storage-621455 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p NoKubernetes-275969 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ start   │ -p offline-crio-259515 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio     │ offline-crio-259515         │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p pause-538803 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-538803                │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p NoKubernetes-275969 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p stopped-upgrade-289368 --memory=3072 --vm-driver=docker  --container-runtime=crio                                  │ stopped-upgrade-289368      │ jenkins │ v1.32.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p NoKubernetes-275969 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ stop    │ stopped-upgrade-289368 stop                                                                                           │ stopped-upgrade-289368      │ jenkins │ v1.32.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p stopped-upgrade-289368 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio              │ stopped-upgrade-289368      │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p NoKubernetes-275969                                                                                                │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p NoKubernetes-275969 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p pause-538803 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-538803                │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p stopped-upgrade-289368                                                                                             │ stopped-upgrade-289368      │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p offline-crio-259515                                                                                                │ offline-crio-259515         │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ ssh     │ -p NoKubernetes-275969 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ start   │ -p missing-upgrade-159057 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-159057      │ jenkins │ v1.32.0 │ 17 Oct 25 20:08 UTC │                     │
	│ pause   │ -p pause-538803 --alsologtostderr -v=5                                                                                │ pause-538803                │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ start   │ -p force-systemd-env-834947 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio            │ force-systemd-env-834947    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ stop    │ -p NoKubernetes-275969                                                                                                │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:08:09
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:08:09.483557  326509 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:08:09.483853  326509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:08:09.483864  326509 out.go:374] Setting ErrFile to fd 2...
	I1017 20:08:09.483868  326509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:08:09.484104  326509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:08:09.484672  326509 out.go:368] Setting JSON to false
	I1017 20:08:09.486108  326509 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6637,"bootTime":1760725052,"procs":268,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:08:09.486245  326509 start.go:141] virtualization: kvm guest
	I1017 20:08:09.488090  326509 out.go:179] * [force-systemd-env-834947] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:08:09.490035  326509 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:08:09.490047  326509 notify.go:220] Checking for updates...
	I1017 20:08:09.493663  326509 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:08:09.497002  326509 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:08:09.498922  326509 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:08:09.500594  326509 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:08:09.502087  326509 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1017 20:08:09.504128  326509 config.go:182] Loaded profile config "NoKubernetes-275969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1017 20:08:09.504238  326509 config.go:182] Loaded profile config "missing-upgrade-159057": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1017 20:08:09.504337  326509 config.go:182] Loaded profile config "pause-538803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:08:09.504434  326509 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:08:09.534470  326509 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:08:09.534560  326509 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:08:09.623644  326509 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-17 20:08:09.609835119 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:08:09.623844  326509 docker.go:318] overlay module found
	I1017 20:08:09.626458  326509 out.go:179] * Using the docker driver based on user configuration
	I1017 20:08:09.628023  326509 start.go:305] selected driver: docker
	I1017 20:08:09.628050  326509 start.go:925] validating driver "docker" against <nil>
	I1017 20:08:09.628064  326509 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:08:09.628807  326509 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:08:09.712810  326509 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-17 20:08:09.699749786 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:08:09.712998  326509 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:08:09.713262  326509 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 20:08:09.715689  326509 out.go:179] * Using Docker driver with root privileges
	I1017 20:08:09.717104  326509 cni.go:84] Creating CNI manager for ""
	I1017 20:08:09.717192  326509 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:08:09.717210  326509 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:08:09.717335  326509 start.go:349] cluster config:
	{Name:force-systemd-env-834947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-834947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:08:09.718924  326509 out.go:179] * Starting "force-systemd-env-834947" primary control-plane node in "force-systemd-env-834947" cluster
	I1017 20:08:09.720370  326509 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:08:09.721684  326509 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:08:09.723203  326509 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:08:09.723271  326509 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 20:08:09.723293  326509 cache.go:58] Caching tarball of preloaded images
	I1017 20:08:09.723335  326509 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:08:09.723416  326509 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:08:09.723432  326509 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:08:09.723580  326509 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/force-systemd-env-834947/config.json ...
	I1017 20:08:09.723614  326509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/force-systemd-env-834947/config.json: {Name:mk27776ceaf3163b99a2a43c63e279b97a0192b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:09.749047  326509 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:08:09.749079  326509 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:08:09.749102  326509 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:08:09.749147  326509 start.go:360] acquireMachinesLock for force-systemd-env-834947: {Name:mk217799f71d7455b56cc8b3bff2313c0808e0c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:08:09.749263  326509 start.go:364] duration metric: took 90.698µs to acquireMachinesLock for "force-systemd-env-834947"
	I1017 20:08:09.749295  326509 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-834947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-834947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:08:09.749389  326509 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.878152349Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.878970293Z" level=info msg="Conmon does support the --sync option"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.878988228Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.87900218Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.879724273Z" level=info msg="Conmon does support the --sync option"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.87975965Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.883857271Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.883880782Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.884385175Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.884853037Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.884917161Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.890957022Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.939699383Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-6vcfs Namespace:kube-system ID:4b0b3a01f43d07815e75b5a421c278bb4c85b369bb51c3e04f886cc2104540f6 UID:76a7af6d-3452-4537-91e4-0b041c95be66 NetNS:/var/run/netns/9362b2cc-a7da-46bc-8e01-a25444be037d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000410010}] Aliases:map[]}"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.939919489Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-6vcfs for CNI network kindnet (type=ptp)"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940357373Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940380058Z" level=info msg="Starting seccomp notifier watcher"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940423177Z" level=info msg="Create NRI interface"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.94053685Z" level=info msg="built-in NRI default validator is disabled"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940550016Z" level=info msg="runtime interface created"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940559347Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940564364Z" level=info msg="runtime interface starting up..."
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940569599Z" level=info msg="starting plugins..."
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940580856Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940899454Z" level=info msg="No systemd watchdog enabled"
	Oct 17 20:08:05 pause-538803 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	3ed90fee941cf       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   4b0b3a01f43d0       coredns-66bc5c9577-6vcfs               kube-system
	239fbd7452fd9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   a06d44002364b       kindnet-rrb27                          kube-system
	95259cba17427       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago      Running             kube-proxy                0                   777e924392d8b       kube-proxy-h7qhn                       kube-system
	19631948f3c47       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Running             kube-apiserver            0                   7504e6207d769       kube-apiserver-pause-538803            kube-system
	6fcac3cb702e3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Running             etcd                      0                   4458de09b8c1a       etcd-pause-538803                      kube-system
	7c25bf83c1b53       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   35 seconds ago      Running             kube-controller-manager   0                   af0913de64fd3       kube-controller-manager-pause-538803   kube-system
	602df903b6e76       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago      Running             kube-scheduler            0                   063f2ab7582ad       kube-scheduler-pause-538803            kube-system
	
	
	==> coredns [3ed90fee941cfc6b47aaaa62ee5f1e5de18e4b58ea3f2b426f3709c9d678036f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57349 - 13578 "HINFO IN 4241854040775296959.5790895250538457359. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067374177s
	
	
	==> describe nodes <==
	Name:               pause-538803
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-538803
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=pause-538803
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_07_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:07:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-538803
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:08:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:07:59 +0000   Fri, 17 Oct 2025 20:07:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:07:59 +0000   Fri, 17 Oct 2025 20:07:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:07:59 +0000   Fri, 17 Oct 2025 20:07:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:07:59 +0000   Fri, 17 Oct 2025 20:07:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-538803
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                d43089b4-8ee3-42e9-872e-987862edea0e
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-6vcfs                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-538803                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-rrb27                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-538803             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-538803    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-h7qhn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-538803             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node pause-538803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node pause-538803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node pause-538803 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node pause-538803 event: Registered Node pause-538803 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-538803 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [6fcac3cb702e33b133c829840f399bc5c99ff4647b4574e33e26427fa5b3dbae] <==
	{"level":"warn","ts":"2025-10-17T20:07:39.315247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.323973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.333992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.342485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.353195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.361455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.368435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.377683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.386788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.396929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.410510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.420785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.429696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.437650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.445871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.454422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.463129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.470883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.482907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.492435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.499922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.515009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.520026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.530871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.539012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34622","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:08:13 up  1:50,  0 user,  load average: 4.39, 2.40, 1.68
	Linux pause-538803 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [239fbd7452fd92cc2603d099847edd02ad7e0c132f82de50e4e269a5e6b4f482] <==
	I1017 20:07:48.686768       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:07:48.687176       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:07:48.691198       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:07:48.691229       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:07:48.691262       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:07:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:07:48.892528       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:07:48.926187       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:07:48.926220       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:07:48.926406       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:07:49.126411       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:07:49.126579       1 metrics.go:72] Registering metrics
	I1017 20:07:49.126751       1 controller.go:711] "Syncing nftables rules"
	I1017 20:07:58.893808       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:07:58.893885       1 main.go:301] handling current node
	I1017 20:08:08.899869       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:08:08.899908       1 main.go:301] handling current node
	
	
	==> kube-apiserver [19631948f3c471e156cc03e91057d9598ae3a0f997e6aecab1d975d7d205c239] <==
	I1017 20:07:40.402372       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:07:40.416534       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:07:40.417819       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:07:40.416563       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:07:40.421686       1 controller.go:667] quota admission added evaluator for: namespaces
	E1017 20:07:40.428246       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1017 20:07:40.428390       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:07:40.599492       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:07:41.259924       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 20:07:41.266087       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 20:07:41.266114       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:07:42.061874       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:07:42.105733       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:07:42.173658       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 20:07:42.180645       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1017 20:07:42.181894       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:07:42.186322       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:07:42.294787       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:07:43.419678       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:07:43.430512       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 20:07:43.440613       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 20:07:47.298827       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:07:47.303505       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:07:47.998168       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1017 20:07:48.200028       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7c25bf83c1b5316591d37c2782089bd05a37091827c8117997dbfc24c7de6219] <==
	I1017 20:07:47.293505       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:07:47.293549       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 20:07:47.293516       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:07:47.294434       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 20:07:47.294466       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:07:47.294479       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:07:47.294487       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:07:47.294617       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 20:07:47.294893       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:07:47.294979       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:07:47.295624       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:07:47.296987       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:07:47.297016       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:07:47.297033       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 20:07:47.297074       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:07:47.297097       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:07:47.297231       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:07:47.297615       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:07:47.298386       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 20:07:47.298833       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:07:47.298938       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:07:47.302242       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:07:47.309058       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:07:47.310363       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:08:02.245866       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [95259cba17427cdbbfae053cd2b39d4b3c654df60e1e71078c392e0c7d14a921] <==
	I1017 20:07:48.444974       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:07:48.499244       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:07:48.599908       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:07:48.599955       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 20:07:48.600034       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:07:48.621911       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:07:48.621957       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:07:48.627791       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:07:48.628135       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:07:48.628175       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:07:48.629684       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:07:48.629773       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:07:48.629836       1 config.go:309] "Starting node config controller"
	I1017 20:07:48.629891       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:07:48.629902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:07:48.629864       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:07:48.629911       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:07:48.629856       1 config.go:200] "Starting service config controller"
	I1017 20:07:48.629994       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:07:48.730388       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:07:48.730502       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:07:48.730510       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [602df903b6e76bb500b5c63eff1e5965496e6a42559bca919255952ce7b32f06] <==
	E1017 20:07:40.390278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:07:40.390325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:07:40.390336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:07:40.390398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:07:40.390402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:07:40.390456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:07:40.390487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:07:40.390625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:07:40.390869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:07:40.391165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:07:40.392133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:07:41.221142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:07:41.250338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:07:41.267477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:07:41.389146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:07:41.429676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:07:41.444723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:07:41.493484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:07:41.558155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:07:41.639444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:07:41.678071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:07:41.705736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:07:41.754614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 20:07:41.776775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1017 20:07:43.580481       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:07:44 pause-538803 kubelet[1291]: E1017 20:07:44.383072    1291 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-538803\" already exists" pod="kube-system/etcd-pause-538803"
	Oct 17 20:07:44 pause-538803 kubelet[1291]: I1017 20:07:44.393615    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-538803" podStartSLOduration=1.393596156 podStartE2EDuration="1.393596156s" podCreationTimestamp="2025-10-17 20:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:44.393514368 +0000 UTC m=+1.173817736" watchObservedRunningTime="2025-10-17 20:07:44.393596156 +0000 UTC m=+1.173899505"
	Oct 17 20:07:44 pause-538803 kubelet[1291]: I1017 20:07:44.401838    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-538803" podStartSLOduration=1.4018133609999999 podStartE2EDuration="1.401813361s" podCreationTimestamp="2025-10-17 20:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:44.401814076 +0000 UTC m=+1.182117442" watchObservedRunningTime="2025-10-17 20:07:44.401813361 +0000 UTC m=+1.182116732"
	Oct 17 20:07:44 pause-538803 kubelet[1291]: I1017 20:07:44.422134    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-538803" podStartSLOduration=1.422098941 podStartE2EDuration="1.422098941s" podCreationTimestamp="2025-10-17 20:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:44.410882715 +0000 UTC m=+1.191186105" watchObservedRunningTime="2025-10-17 20:07:44.422098941 +0000 UTC m=+1.202402309"
	Oct 17 20:07:44 pause-538803 kubelet[1291]: I1017 20:07:44.435943    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-538803" podStartSLOduration=1.435918488 podStartE2EDuration="1.435918488s" podCreationTimestamp="2025-10-17 20:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:44.422917315 +0000 UTC m=+1.203220681" watchObservedRunningTime="2025-10-17 20:07:44.435918488 +0000 UTC m=+1.216221856"
	Oct 17 20:07:47 pause-538803 kubelet[1291]: I1017 20:07:47.274223    1291 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 20:07:47 pause-538803 kubelet[1291]: I1017 20:07:47.275005    1291 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044315    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/65f6d7fa-8c43-4a78-8945-23e8be1181a0-kube-proxy\") pod \"kube-proxy-h7qhn\" (UID: \"65f6d7fa-8c43-4a78-8945-23e8be1181a0\") " pod="kube-system/kube-proxy-h7qhn"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044371    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65f6d7fa-8c43-4a78-8945-23e8be1181a0-lib-modules\") pod \"kube-proxy-h7qhn\" (UID: \"65f6d7fa-8c43-4a78-8945-23e8be1181a0\") " pod="kube-system/kube-proxy-h7qhn"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044394    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1-cni-cfg\") pod \"kindnet-rrb27\" (UID: \"3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1\") " pod="kube-system/kindnet-rrb27"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044424    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1-xtables-lock\") pod \"kindnet-rrb27\" (UID: \"3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1\") " pod="kube-system/kindnet-rrb27"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044445    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwh84\" (UniqueName: \"kubernetes.io/projected/3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1-kube-api-access-gwh84\") pod \"kindnet-rrb27\" (UID: \"3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1\") " pod="kube-system/kindnet-rrb27"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044470    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nwq7\" (UniqueName: \"kubernetes.io/projected/65f6d7fa-8c43-4a78-8945-23e8be1181a0-kube-api-access-9nwq7\") pod \"kube-proxy-h7qhn\" (UID: \"65f6d7fa-8c43-4a78-8945-23e8be1181a0\") " pod="kube-system/kube-proxy-h7qhn"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044490    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1-lib-modules\") pod \"kindnet-rrb27\" (UID: \"3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1\") " pod="kube-system/kindnet-rrb27"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044525    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65f6d7fa-8c43-4a78-8945-23e8be1181a0-xtables-lock\") pod \"kube-proxy-h7qhn\" (UID: \"65f6d7fa-8c43-4a78-8945-23e8be1181a0\") " pod="kube-system/kube-proxy-h7qhn"
	Oct 17 20:07:49 pause-538803 kubelet[1291]: I1017 20:07:49.400358    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rrb27" podStartSLOduration=1.400337218 podStartE2EDuration="1.400337218s" podCreationTimestamp="2025-10-17 20:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:49.400206337 +0000 UTC m=+6.180509705" watchObservedRunningTime="2025-10-17 20:07:49.400337218 +0000 UTC m=+6.180640587"
	Oct 17 20:07:49 pause-538803 kubelet[1291]: I1017 20:07:49.410321    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h7qhn" podStartSLOduration=1.410300673 podStartE2EDuration="1.410300673s" podCreationTimestamp="2025-10-17 20:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:49.410287418 +0000 UTC m=+6.190590811" watchObservedRunningTime="2025-10-17 20:07:49.410300673 +0000 UTC m=+6.190604043"
	Oct 17 20:07:59 pause-538803 kubelet[1291]: I1017 20:07:59.339096    1291 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 20:07:59 pause-538803 kubelet[1291]: I1017 20:07:59.435164    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ggnf\" (UniqueName: \"kubernetes.io/projected/76a7af6d-3452-4537-91e4-0b041c95be66-kube-api-access-6ggnf\") pod \"coredns-66bc5c9577-6vcfs\" (UID: \"76a7af6d-3452-4537-91e4-0b041c95be66\") " pod="kube-system/coredns-66bc5c9577-6vcfs"
	Oct 17 20:07:59 pause-538803 kubelet[1291]: I1017 20:07:59.435224    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a7af6d-3452-4537-91e4-0b041c95be66-config-volume\") pod \"coredns-66bc5c9577-6vcfs\" (UID: \"76a7af6d-3452-4537-91e4-0b041c95be66\") " pod="kube-system/coredns-66bc5c9577-6vcfs"
	Oct 17 20:08:00 pause-538803 kubelet[1291]: I1017 20:08:00.447729    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6vcfs" podStartSLOduration=12.447705983 podStartE2EDuration="12.447705983s" podCreationTimestamp="2025-10-17 20:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:08:00.431209018 +0000 UTC m=+17.211512386" watchObservedRunningTime="2025-10-17 20:08:00.447705983 +0000 UTC m=+17.228009351"
	Oct 17 20:08:09 pause-538803 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:08:09 pause-538803 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:08:09 pause-538803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 20:08:09 pause-538803 systemd[1]: kubelet.service: Consumed 1.243s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-538803 -n pause-538803
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-538803 -n pause-538803: exit status 2 (439.452081ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-538803 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-538803
helpers_test.go:243: (dbg) docker inspect pause-538803:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b",
	        "Created": "2025-10-17T20:07:22.303547347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:07:22.816384144Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b/hosts",
	        "LogPath": "/var/lib/docker/containers/fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b/fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b-json.log",
	        "Name": "/pause-538803",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-538803:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-538803",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe0cc8fb1393baa7af7ac2e56367e2b5aaf53bd6d8b0bb582992cea77ce45b5b",
	                "LowerDir": "/var/lib/docker/overlay2/488c7b258b4aab4a7ae003bdb5089c379981bc783d502690d206ac10d8ba5c5c-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/488c7b258b4aab4a7ae003bdb5089c379981bc783d502690d206ac10d8ba5c5c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/488c7b258b4aab4a7ae003bdb5089c379981bc783d502690d206ac10d8ba5c5c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/488c7b258b4aab4a7ae003bdb5089c379981bc783d502690d206ac10d8ba5c5c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-538803",
	                "Source": "/var/lib/docker/volumes/pause-538803/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-538803",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-538803",
	                "name.minikube.sigs.k8s.io": "pause-538803",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "55bf0d932333482d1bcc1f879ddc9e5f7657216ee8bc9175ccaac2a85d50af5c",
	            "SandboxKey": "/var/run/docker/netns/55bf0d932333",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-538803": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:ec:86:05:3b:76",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fbf0617e713532420ddbc60c83f1910107887c0c6aff3557126c73c2a3421d76",
	                    "EndpointID": "c00a8a32cb550b4ea3d83458603dba859d17a032d1c2b276cc7196d25bf1e4f2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-538803",
	                        "fe0cc8fb1393"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
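The inspect output above is the state Docker has for the paused profile: the node container itself is still Running and not Paused (minikube's pause acts on kubelet and the workloads inside the node, so Docker still reports the container as running), and the API server port 8443/tcp is published on 127.0.0.1:33097. Extracting just those fields needs only a narrow struct over the JSON; a minimal sketch, not part of the harness, that reads `docker inspect pause-538803` output on stdin:

	// Minimal sketch: decode only the fields of the inspect JSON above that
	// the post-mortem cares about.
	// Run as: docker inspect pause-538803 | go run main.go
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type container struct {
		State struct {
			Status string
			Paused bool
		}
		NetworkSettings struct {
			Ports map[string][]struct{ HostIp, HostPort string }
		}
	}

	func main() {
		var cs []container // docker inspect emits a JSON array
		if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil {
			panic(err)
		}
		for _, c := range cs {
			fmt.Println("status:", c.State.Status, "paused:", c.State.Paused)
			for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
				fmt.Printf("8443/tcp published on %s:%s\n", b.HostIp, b.HostPort)
			}
		}
	}
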
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-538803 -n pause-538803
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-538803 -n pause-538803: exit status 2 (373.448408ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-538803 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-538803 logs -n 25: (1.430453419s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-910370 --schedule 15s                                                                               │ scheduled-stop-910370       │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ stop    │ -p scheduled-stop-910370 --schedule 15s                                                                               │ scheduled-stop-910370       │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ stop    │ -p scheduled-stop-910370 --schedule 15s                                                                               │ scheduled-stop-910370       │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:06 UTC │
	│ delete  │ -p scheduled-stop-910370                                                                                              │ scheduled-stop-910370       │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:06 UTC │
	│ start   │ -p insufficient-storage-621455 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-621455 │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ delete  │ -p insufficient-storage-621455                                                                                        │ insufficient-storage-621455 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p NoKubernetes-275969 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ start   │ -p offline-crio-259515 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio     │ offline-crio-259515         │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p pause-538803 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-538803                │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p NoKubernetes-275969 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p stopped-upgrade-289368 --memory=3072 --vm-driver=docker  --container-runtime=crio                                  │ stopped-upgrade-289368      │ jenkins │ v1.32.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p NoKubernetes-275969 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ stop    │ stopped-upgrade-289368 stop                                                                                           │ stopped-upgrade-289368      │ jenkins │ v1.32.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p stopped-upgrade-289368 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio              │ stopped-upgrade-289368      │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p NoKubernetes-275969                                                                                                │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p NoKubernetes-275969 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p pause-538803 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-538803                │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p stopped-upgrade-289368                                                                                             │ stopped-upgrade-289368      │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p offline-crio-259515                                                                                                │ offline-crio-259515         │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ ssh     │ -p NoKubernetes-275969 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ start   │ -p missing-upgrade-159057 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-159057      │ jenkins │ v1.32.0 │ 17 Oct 25 20:08 UTC │                     │
	│ pause   │ -p pause-538803 --alsologtostderr -v=5                                                                                │ pause-538803                │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ start   │ -p force-systemd-env-834947 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio            │ force-systemd-env-834947    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ stop    │ -p NoKubernetes-275969                                                                                                │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p NoKubernetes-275969 --driver=docker  --container-runtime=crio                                                      │ NoKubernetes-275969         │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:08:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:08:13.226455  327995 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:08:13.226706  327995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:08:13.226710  327995 out.go:374] Setting ErrFile to fd 2...
	I1017 20:08:13.226713  327995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:08:13.226925  327995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:08:13.227398  327995 out.go:368] Setting JSON to false
	I1017 20:08:13.228527  327995 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6641,"bootTime":1760725052,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:08:13.228604  327995 start.go:141] virtualization: kvm guest
	I1017 20:08:13.231307  327995 out.go:179] * [NoKubernetes-275969] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:08:13.233508  327995 notify.go:220] Checking for updates...
	I1017 20:08:13.233564  327995 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:08:13.235594  327995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:08:13.237839  327995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:08:13.239838  327995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:08:13.241573  327995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:08:13.244022  327995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:08:13.246726  327995 config.go:182] Loaded profile config "NoKubernetes-275969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1017 20:08:13.247544  327995 start.go:1804] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1017 20:08:13.247572  327995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:08:13.276408  327995 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:08:13.276527  327995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:08:13.359600  327995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:false NGoroutines:84 SystemTime:2025-10-17 20:08:13.346559978 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:08:13.359712  327995 docker.go:318] overlay module found
	I1017 20:08:13.364299  327995 out.go:179] * Using the docker driver based on existing profile
	I1017 20:08:13.366486  327995 start.go:305] selected driver: docker
	I1017 20:08:13.366501  327995 start.go:925] validating driver "docker" against &{Name:NoKubernetes-275969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-275969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:08:13.366608  327995 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:08:13.366733  327995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:08:13.448920  327995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:84 SystemTime:2025-10-17 20:08:13.43579179 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:08:13.449944  327995 cni.go:84] Creating CNI manager for ""
	I1017 20:08:13.449998  327995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:08:13.450041  327995 start.go:349] cluster config:
	{Name:NoKubernetes-275969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-275969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:08:13.453497  327995 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-275969
	I1017 20:08:13.455273  327995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:08:13.457111  327995 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:08:13.458671  327995 preload.go:183] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1017 20:08:13.458800  327995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	W1017 20:08:13.480054  327995 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1017 20:08:13.486275  327995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:08:13.486310  327995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	W1017 20:08:13.518874  327995 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1017 20:08:13.519083  327995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/NoKubernetes-275969/config.json ...
	I1017 20:08:13.519330  327995 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:08:13.519360  327995 start.go:360] acquireMachinesLock for NoKubernetes-275969: {Name:mk5f0bcf54a9c081fec5dd8e8c53ae0c141ae9b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:08:13.519432  327995 start.go:364] duration metric: took 49.295µs to acquireMachinesLock for "NoKubernetes-275969"
	I1017 20:08:13.519444  327995 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:08:13.519448  327995 fix.go:54] fixHost starting: 
	I1017 20:08:13.519715  327995 cli_runner.go:164] Run: docker container inspect NoKubernetes-275969 --format={{.State.Status}}
	I1017 20:08:13.541513  327995 fix.go:112] recreateIfNeeded on NoKubernetes-275969: state=Stopped err=<nil>
	W1017 20:08:13.541541  327995 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:08:09.751672  326509 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:08:09.751993  326509 start.go:159] libmachine.API.Create for "force-systemd-env-834947" (driver="docker")
	I1017 20:08:09.752030  326509 client.go:168] LocalClient.Create starting
	I1017 20:08:09.752140  326509 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem
	I1017 20:08:09.752185  326509 main.go:141] libmachine: Decoding PEM data...
	I1017 20:08:09.752210  326509 main.go:141] libmachine: Parsing certificate...
	I1017 20:08:09.752305  326509 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem
	I1017 20:08:09.752335  326509 main.go:141] libmachine: Decoding PEM data...
	I1017 20:08:09.752351  326509 main.go:141] libmachine: Parsing certificate...
	I1017 20:08:09.752843  326509 cli_runner.go:164] Run: docker network inspect force-systemd-env-834947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:08:09.773668  326509 cli_runner.go:211] docker network inspect force-systemd-env-834947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:08:09.773759  326509 network_create.go:284] running [docker network inspect force-systemd-env-834947] to gather additional debugging logs...
	I1017 20:08:09.773782  326509 cli_runner.go:164] Run: docker network inspect force-systemd-env-834947
	W1017 20:08:09.794125  326509 cli_runner.go:211] docker network inspect force-systemd-env-834947 returned with exit code 1
	I1017 20:08:09.794165  326509 network_create.go:287] error running [docker network inspect force-systemd-env-834947]: docker network inspect force-systemd-env-834947: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-834947 not found
	I1017 20:08:09.794205  326509 network_create.go:289] output of [docker network inspect force-systemd-env-834947]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-834947 not found
	
	** /stderr **
	I1017 20:08:09.794701  326509 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:08:09.819025  326509 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d34a70da1174 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:b8:c9:c3:2e:b0} reservation:<nil>}
	I1017 20:08:09.819784  326509 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-07edace58173 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f3:28:2c:52:ce} reservation:<nil>}
	I1017 20:08:09.820582  326509 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a478249e8fe7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:51:65:8d:cb:60} reservation:<nil>}
	I1017 20:08:09.821346  326509 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fbf0617e7135 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:b3:ea:9a:07:06} reservation:<nil>}
	I1017 20:08:09.822405  326509 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001edfd80}
	I1017 20:08:09.822434  326509 network_create.go:124] attempt to create docker network force-systemd-env-834947 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1017 20:08:09.822501  326509 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-834947 force-systemd-env-834947
	I1017 20:08:09.892540  326509 network_create.go:108] docker network force-systemd-env-834947 192.168.85.0/24 created
	I1017 20:08:09.892581  326509 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-834947" container
	I1017 20:08:09.892655  326509 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:08:09.914103  326509 cli_runner.go:164] Run: docker volume create force-systemd-env-834947 --label name.minikube.sigs.k8s.io=force-systemd-env-834947 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:08:09.933776  326509 oci.go:103] Successfully created a docker volume force-systemd-env-834947
	I1017 20:08:09.933852  326509 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-834947-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-834947 --entrypoint /usr/bin/test -v force-systemd-env-834947:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:08:13.127183  326509 cli_runner.go:217] Completed: docker run --rm --name force-systemd-env-834947-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-834947 --entrypoint /usr/bin/test -v force-systemd-env-834947:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (3.193279412s)
	I1017 20:08:13.127236  326509 oci.go:107] Successfully prepared a docker volume force-systemd-env-834947
	I1017 20:08:13.127301  326509 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:08:13.127337  326509 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:08:13.127412  326509 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-834947:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.878152349Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.878970293Z" level=info msg="Conmon does support the --sync option"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.878988228Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.87900218Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.879724273Z" level=info msg="Conmon does support the --sync option"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.87975965Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.883857271Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.883880782Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.884385175Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.884853037Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.884917161Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.890957022Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.939699383Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-6vcfs Namespace:kube-system ID:4b0b3a01f43d07815e75b5a421c278bb4c85b369bb51c3e04f886cc2104540f6 UID:76a7af6d-3452-4537-91e4-0b041c95be66 NetNS:/var/run/netns/9362b2cc-a7da-46bc-8e01-a25444be037d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000410010}] Aliases:map[]}"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.939919489Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-6vcfs for CNI network kindnet (type=ptp)"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940357373Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940380058Z" level=info msg="Starting seccomp notifier watcher"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940423177Z" level=info msg="Create NRI interface"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.94053685Z" level=info msg="built-in NRI default validator is disabled"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940550016Z" level=info msg="runtime interface created"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940559347Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940564364Z" level=info msg="runtime interface starting up..."
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940569599Z" level=info msg="starting plugins..."
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940580856Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 17 20:08:05 pause-538803 crio[2122]: time="2025-10-17T20:08:05.940899454Z" level=info msg="No systemd watchdog enabled"
	Oct 17 20:08:05 pause-538803 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	3ed90fee941cf       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   0                   4b0b3a01f43d0       coredns-66bc5c9577-6vcfs               kube-system
	239fbd7452fd9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   27 seconds ago      Running             kindnet-cni               0                   a06d44002364b       kindnet-rrb27                          kube-system
	95259cba17427       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   27 seconds ago      Running             kube-proxy                0                   777e924392d8b       kube-proxy-h7qhn                       kube-system
	19631948f3c47       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   38 seconds ago      Running             kube-apiserver            0                   7504e6207d769       kube-apiserver-pause-538803            kube-system
	6fcac3cb702e3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   38 seconds ago      Running             etcd                      0                   4458de09b8c1a       etcd-pause-538803                      kube-system
	7c25bf83c1b53       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   38 seconds ago      Running             kube-controller-manager   0                   af0913de64fd3       kube-controller-manager-pause-538803   kube-system
	602df903b6e76       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   38 seconds ago      Running             kube-scheduler            0                   063f2ab7582ad       kube-scheduler-pause-538803            kube-system
	
	
	==> coredns [3ed90fee941cfc6b47aaaa62ee5f1e5de18e4b58ea3f2b426f3709c9d678036f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57349 - 13578 "HINFO IN 4241854040775296959.5790895250538457359. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067374177s
	
	
	==> describe nodes <==
	Name:               pause-538803
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-538803
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=pause-538803
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_07_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:07:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-538803
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:08:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:07:59 +0000   Fri, 17 Oct 2025 20:07:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:07:59 +0000   Fri, 17 Oct 2025 20:07:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:07:59 +0000   Fri, 17 Oct 2025 20:07:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:07:59 +0000   Fri, 17 Oct 2025 20:07:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-538803
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                d43089b4-8ee3-42e9-872e-987862edea0e
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-6vcfs                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-538803                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-rrb27                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-538803             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-538803    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-h7qhn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-538803             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node pause-538803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node pause-538803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node pause-538803 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node pause-538803 event: Registered Node pause-538803 in Controller
	  Normal  NodeReady                16s   kubelet          Node pause-538803 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [6fcac3cb702e33b133c829840f399bc5c99ff4647b4574e33e26427fa5b3dbae] <==
	{"level":"warn","ts":"2025-10-17T20:07:39.315247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.323973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.333992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.342485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.353195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.361455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.368435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.377683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.386788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.396929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.410510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.420785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.429696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.437650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.445871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.454422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.463129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.470883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.482907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.492435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.499922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.515009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.520026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.530871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:39.539012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34622","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:08:15 up  1:50,  0 user,  load average: 4.92, 2.54, 1.73
	Linux pause-538803 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [239fbd7452fd92cc2603d099847edd02ad7e0c132f82de50e4e269a5e6b4f482] <==
	I1017 20:07:48.686768       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:07:48.687176       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:07:48.691198       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:07:48.691229       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:07:48.691262       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:07:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:07:48.892528       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:07:48.926187       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:07:48.926220       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:07:48.926406       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:07:49.126411       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:07:49.126579       1 metrics.go:72] Registering metrics
	I1017 20:07:49.126751       1 controller.go:711] "Syncing nftables rules"
	I1017 20:07:58.893808       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:07:58.893885       1 main.go:301] handling current node
	I1017 20:08:08.899869       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:08:08.899908       1 main.go:301] handling current node
	
	
	==> kube-apiserver [19631948f3c471e156cc03e91057d9598ae3a0f997e6aecab1d975d7d205c239] <==
	I1017 20:07:40.402372       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:07:40.416534       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:07:40.417819       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:07:40.416563       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:07:40.421686       1 controller.go:667] quota admission added evaluator for: namespaces
	E1017 20:07:40.428246       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1017 20:07:40.428390       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:07:40.599492       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:07:41.259924       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 20:07:41.266087       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 20:07:41.266114       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:07:42.061874       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:07:42.105733       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:07:42.173658       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 20:07:42.180645       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1017 20:07:42.181894       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:07:42.186322       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:07:42.294787       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:07:43.419678       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:07:43.430512       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 20:07:43.440613       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 20:07:47.298827       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:07:47.303505       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:07:47.998168       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1017 20:07:48.200028       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7c25bf83c1b5316591d37c2782089bd05a37091827c8117997dbfc24c7de6219] <==
	I1017 20:07:47.293505       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:07:47.293549       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 20:07:47.293516       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:07:47.294434       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 20:07:47.294466       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:07:47.294479       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:07:47.294487       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:07:47.294617       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 20:07:47.294893       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:07:47.294979       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:07:47.295624       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:07:47.296987       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:07:47.297016       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:07:47.297033       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 20:07:47.297074       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:07:47.297097       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:07:47.297231       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:07:47.297615       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:07:47.298386       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 20:07:47.298833       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:07:47.298938       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:07:47.302242       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:07:47.309058       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:07:47.310363       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:08:02.245866       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [95259cba17427cdbbfae053cd2b39d4b3c654df60e1e71078c392e0c7d14a921] <==
	I1017 20:07:48.444974       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:07:48.499244       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:07:48.599908       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:07:48.599955       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 20:07:48.600034       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:07:48.621911       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:07:48.621957       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:07:48.627791       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:07:48.628135       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:07:48.628175       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:07:48.629684       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:07:48.629773       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:07:48.629836       1 config.go:309] "Starting node config controller"
	I1017 20:07:48.629891       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:07:48.629902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:07:48.629864       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:07:48.629911       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:07:48.629856       1 config.go:200] "Starting service config controller"
	I1017 20:07:48.629994       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:07:48.730388       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:07:48.730502       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:07:48.730510       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [602df903b6e76bb500b5c63eff1e5965496e6a42559bca919255952ce7b32f06] <==
	E1017 20:07:40.390278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:07:40.390325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:07:40.390336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:07:40.390398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:07:40.390402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:07:40.390456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:07:40.390487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:07:40.390625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:07:40.390869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:07:40.391165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:07:40.392133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:07:41.221142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:07:41.250338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:07:41.267477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:07:41.389146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:07:41.429676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:07:41.444723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:07:41.493484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:07:41.558155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:07:41.639444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:07:41.678071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:07:41.705736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:07:41.754614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 20:07:41.776775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1017 20:07:43.580481       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:07:44 pause-538803 kubelet[1291]: E1017 20:07:44.383072    1291 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-538803\" already exists" pod="kube-system/etcd-pause-538803"
	Oct 17 20:07:44 pause-538803 kubelet[1291]: I1017 20:07:44.393615    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-538803" podStartSLOduration=1.393596156 podStartE2EDuration="1.393596156s" podCreationTimestamp="2025-10-17 20:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:44.393514368 +0000 UTC m=+1.173817736" watchObservedRunningTime="2025-10-17 20:07:44.393596156 +0000 UTC m=+1.173899505"
	Oct 17 20:07:44 pause-538803 kubelet[1291]: I1017 20:07:44.401838    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-538803" podStartSLOduration=1.4018133609999999 podStartE2EDuration="1.401813361s" podCreationTimestamp="2025-10-17 20:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:44.401814076 +0000 UTC m=+1.182117442" watchObservedRunningTime="2025-10-17 20:07:44.401813361 +0000 UTC m=+1.182116732"
	Oct 17 20:07:44 pause-538803 kubelet[1291]: I1017 20:07:44.422134    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-538803" podStartSLOduration=1.422098941 podStartE2EDuration="1.422098941s" podCreationTimestamp="2025-10-17 20:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:44.410882715 +0000 UTC m=+1.191186105" watchObservedRunningTime="2025-10-17 20:07:44.422098941 +0000 UTC m=+1.202402309"
	Oct 17 20:07:44 pause-538803 kubelet[1291]: I1017 20:07:44.435943    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-538803" podStartSLOduration=1.435918488 podStartE2EDuration="1.435918488s" podCreationTimestamp="2025-10-17 20:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:44.422917315 +0000 UTC m=+1.203220681" watchObservedRunningTime="2025-10-17 20:07:44.435918488 +0000 UTC m=+1.216221856"
	Oct 17 20:07:47 pause-538803 kubelet[1291]: I1017 20:07:47.274223    1291 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 20:07:47 pause-538803 kubelet[1291]: I1017 20:07:47.275005    1291 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044315    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/65f6d7fa-8c43-4a78-8945-23e8be1181a0-kube-proxy\") pod \"kube-proxy-h7qhn\" (UID: \"65f6d7fa-8c43-4a78-8945-23e8be1181a0\") " pod="kube-system/kube-proxy-h7qhn"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044371    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65f6d7fa-8c43-4a78-8945-23e8be1181a0-lib-modules\") pod \"kube-proxy-h7qhn\" (UID: \"65f6d7fa-8c43-4a78-8945-23e8be1181a0\") " pod="kube-system/kube-proxy-h7qhn"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044394    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1-cni-cfg\") pod \"kindnet-rrb27\" (UID: \"3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1\") " pod="kube-system/kindnet-rrb27"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044424    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1-xtables-lock\") pod \"kindnet-rrb27\" (UID: \"3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1\") " pod="kube-system/kindnet-rrb27"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044445    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwh84\" (UniqueName: \"kubernetes.io/projected/3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1-kube-api-access-gwh84\") pod \"kindnet-rrb27\" (UID: \"3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1\") " pod="kube-system/kindnet-rrb27"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044470    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nwq7\" (UniqueName: \"kubernetes.io/projected/65f6d7fa-8c43-4a78-8945-23e8be1181a0-kube-api-access-9nwq7\") pod \"kube-proxy-h7qhn\" (UID: \"65f6d7fa-8c43-4a78-8945-23e8be1181a0\") " pod="kube-system/kube-proxy-h7qhn"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044490    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1-lib-modules\") pod \"kindnet-rrb27\" (UID: \"3f4b0eb5-77c2-4653-96bb-d0bfc6c38ad1\") " pod="kube-system/kindnet-rrb27"
	Oct 17 20:07:48 pause-538803 kubelet[1291]: I1017 20:07:48.044525    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65f6d7fa-8c43-4a78-8945-23e8be1181a0-xtables-lock\") pod \"kube-proxy-h7qhn\" (UID: \"65f6d7fa-8c43-4a78-8945-23e8be1181a0\") " pod="kube-system/kube-proxy-h7qhn"
	Oct 17 20:07:49 pause-538803 kubelet[1291]: I1017 20:07:49.400358    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rrb27" podStartSLOduration=1.400337218 podStartE2EDuration="1.400337218s" podCreationTimestamp="2025-10-17 20:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:49.400206337 +0000 UTC m=+6.180509705" watchObservedRunningTime="2025-10-17 20:07:49.400337218 +0000 UTC m=+6.180640587"
	Oct 17 20:07:49 pause-538803 kubelet[1291]: I1017 20:07:49.410321    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h7qhn" podStartSLOduration=1.410300673 podStartE2EDuration="1.410300673s" podCreationTimestamp="2025-10-17 20:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:49.410287418 +0000 UTC m=+6.190590811" watchObservedRunningTime="2025-10-17 20:07:49.410300673 +0000 UTC m=+6.190604043"
	Oct 17 20:07:59 pause-538803 kubelet[1291]: I1017 20:07:59.339096    1291 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 20:07:59 pause-538803 kubelet[1291]: I1017 20:07:59.435164    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ggnf\" (UniqueName: \"kubernetes.io/projected/76a7af6d-3452-4537-91e4-0b041c95be66-kube-api-access-6ggnf\") pod \"coredns-66bc5c9577-6vcfs\" (UID: \"76a7af6d-3452-4537-91e4-0b041c95be66\") " pod="kube-system/coredns-66bc5c9577-6vcfs"
	Oct 17 20:07:59 pause-538803 kubelet[1291]: I1017 20:07:59.435224    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a7af6d-3452-4537-91e4-0b041c95be66-config-volume\") pod \"coredns-66bc5c9577-6vcfs\" (UID: \"76a7af6d-3452-4537-91e4-0b041c95be66\") " pod="kube-system/coredns-66bc5c9577-6vcfs"
	Oct 17 20:08:00 pause-538803 kubelet[1291]: I1017 20:08:00.447729    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6vcfs" podStartSLOduration=12.447705983 podStartE2EDuration="12.447705983s" podCreationTimestamp="2025-10-17 20:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:08:00.431209018 +0000 UTC m=+17.211512386" watchObservedRunningTime="2025-10-17 20:08:00.447705983 +0000 UTC m=+17.228009351"
	Oct 17 20:08:09 pause-538803 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:08:09 pause-538803 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:08:09 pause-538803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 20:08:09 pause-538803 systemd[1]: kubelet.service: Consumed 1.243s CPU time.
	

                                                
                                                
-- /stdout --
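The kubelet journal above ends with systemd deactivating kubelet.service at 20:08:09, consistent with a pause being attempted on this node; the status probe below then still reports the API server as Running. The pause can be retried by hand to reproduce the failure (a sketch; the profile name is taken from the logs above):

	out/minikube-linux-amd64 pause -p pause-538803 --alsologtostderr -v=1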
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-538803 -n pause-538803
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-538803 -n pause-538803: exit status 2 (341.379875ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-538803 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-726816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-726816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (242.410992ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:10:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
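The MK_ADDON_ENABLE_PAUSED exit above means the addon enable aborted at its pre-check: minikube asks the container runtime which containers are paused, and on this crio node the underlying runc invocation fails because runc's default state directory /run/runc does not exist. The failing check can be reproduced directly on the node (a sketch; the profile name comes from the command above):

	out/minikube-linux-amd64 ssh -p old-k8s-version-726816 -- sudo runc list -f json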
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-726816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-726816 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-726816 describe deploy/metrics-server -n kube-system: exit status 1 (59.57723ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-726816 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
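The image assertion expects the metrics-server Deployment's container image to carry the registry prefix substituted by the --images/--registries flags, i.e. fake.domain/registry.k8s.io/echoserver:1.4; here the Deployment was never created, so there is nothing to match. On a cluster where the addon did deploy, the effective image could be read directly (a sketch using the same kubectl context):

	kubectl --context old-k8s-version-726816 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'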
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-726816
helpers_test.go:243: (dbg) docker inspect old-k8s-version-726816:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d",
	        "Created": "2025-10-17T20:09:36.13713151Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 354865,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:09:36.184929534Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d/hostname",
	        "HostsPath": "/var/lib/docker/containers/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d/hosts",
	        "LogPath": "/var/lib/docker/containers/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d-json.log",
	        "Name": "/old-k8s-version-726816",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-726816:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-726816",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d",
	                "LowerDir": "/var/lib/docker/overlay2/5dcb54ae27fdd82c6888e48a7ef95596d62c8f5db714aa4e6a3ed9f11e961e43-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5dcb54ae27fdd82c6888e48a7ef95596d62c8f5db714aa4e6a3ed9f11e961e43/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5dcb54ae27fdd82c6888e48a7ef95596d62c8f5db714aa4e6a3ed9f11e961e43/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5dcb54ae27fdd82c6888e48a7ef95596d62c8f5db714aa4e6a3ed9f11e961e43/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-726816",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-726816/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-726816",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-726816",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-726816",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "086f413f7935c3c055c60d361042b458d96d66b6804dcac60ce44492e45322f0",
	            "SandboxKey": "/var/run/docker/netns/086f413f7935",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-726816": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:ed:b8:26:63:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a2f3c9774d269d6de3a98b72179a7362d7a29c679daa09f837b76252bd896b76",
	                    "EndpointID": "01ee976001001213b5bf7e5106aed98c6cf30b09529726ca10dcd4c63d44eab2",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-726816",
	                        "5fe53cd658e3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-726816 -n old-k8s-version-726816
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-726816 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-726816 logs -n 25: (1.156385059s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-684669 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ ssh     │ -p cilium-684669 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ ssh     │ -p cilium-684669 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ ssh     │ -p cilium-684669 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ ssh     │ -p cilium-684669 sudo containerd config dump                                                                                                                                                                                                  │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ ssh     │ -p cilium-684669 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ ssh     │ -p cilium-684669 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ ssh     │ -p cilium-684669 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ ssh     │ -p cilium-684669 sudo crio config                                                                                                                                                                                                             │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ delete  │ -p cilium-684669                                                                                                                                                                                                                              │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p running-upgrade-097245 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                                          │ running-upgrade-097245    │ jenkins │ v1.32.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p force-systemd-env-834947                                                                                                                                                                                                                   │ force-systemd-env-834947  │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p cert-expiration-202048 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-202048    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p missing-upgrade-159057 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-159057    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ stop    │ -p kubernetes-upgrade-660693                                                                                                                                                                                                                  │ kubernetes-upgrade-660693 │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-660693 │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ start   │ -p running-upgrade-097245 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-097245    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p missing-upgrade-159057                                                                                                                                                                                                                     │ missing-upgrade-159057    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p force-systemd-flag-599050 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p running-upgrade-097245                                                                                                                                                                                                                     │ running-upgrade-097245    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ ssh     │ force-systemd-flag-599050 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p force-systemd-flag-599050                                                                                                                                                                                                                  │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-726816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:09:51
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
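
For reference when reading the lines below, the header format [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg is the standard klog/glog layout: severity letter, date, timestamp, process id, source location, then the message. A minimal Go sketch that splits such a line into its fields (a hypothetical helper for log triage, not part of minikube) could look like:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the klog/glog header described above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ :]+:\d+)\] (.*)$`)

func main() {
	// A line taken verbatim from the log below.
	line := "I1017 20:09:51.379814  357835 out.go:360] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("level=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
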
	I1017 20:09:51.379814  357835 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:09:51.379975  357835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:51.379994  357835 out.go:374] Setting ErrFile to fd 2...
	I1017 20:09:51.380001  357835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:51.380503  357835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:09:51.381279  357835 out.go:368] Setting JSON to false
	I1017 20:09:51.383508  357835 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6739,"bootTime":1760725052,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:09:51.383675  357835 start.go:141] virtualization: kvm guest
	I1017 20:09:51.389971  357835 out.go:179] * [no-preload-449580] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:09:51.392045  357835 notify.go:220] Checking for updates...
	I1017 20:09:51.392225  357835 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:09:51.394307  357835 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:09:51.395902  357835 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:09:51.397369  357835 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:09:51.399068  357835 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:09:51.400704  357835 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:09:51.403115  357835 config.go:182] Loaded profile config "cert-expiration-202048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:51.403271  357835 config.go:182] Loaded profile config "kubernetes-upgrade-660693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:51.403420  357835 config.go:182] Loaded profile config "old-k8s-version-726816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:09:51.403565  357835 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:09:51.430970  357835 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:09:51.431143  357835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:51.495645  357835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:09:51.484655947 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:09:51.495774  357835 docker.go:318] overlay module found
	I1017 20:09:51.497844  357835 out.go:179] * Using the docker driver based on user configuration
	I1017 20:09:51.499388  357835 start.go:305] selected driver: docker
	I1017 20:09:51.499412  357835 start.go:925] validating driver "docker" against <nil>
	I1017 20:09:51.499428  357835 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:09:51.500212  357835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:51.574239  357835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:09:51.560476702 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:09:51.574537  357835 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:09:51.574857  357835 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:09:51.576795  357835 out.go:179] * Using Docker driver with root privileges
	I1017 20:09:51.578154  357835 cni.go:84] Creating CNI manager for ""
	I1017 20:09:51.578252  357835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:09:51.578270  357835 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:09:51.578466  357835 start.go:349] cluster config:
	{Name:no-preload-449580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:09:51.580355  357835 out.go:179] * Starting "no-preload-449580" primary control-plane node in "no-preload-449580" cluster
	I1017 20:09:51.581815  357835 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:09:51.583510  357835 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:09:51.588901  357835 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:51.589036  357835 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:09:51.589080  357835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/config.json ...
	I1017 20:09:51.589116  357835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/config.json: {Name:mke9d0e66fefbe1620d959334a3157ace24326b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:51.589252  357835 cache.go:107] acquiring lock: {Name:mkd0df842d4d8da119c6855ae5b215973a1bd054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:51.589254  357835 cache.go:107] acquiring lock: {Name:mk58620b56df75044fc4da2f75d8900d628a7966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:51.589269  357835 cache.go:107] acquiring lock: {Name:mk495930b32aab4137b78173fcb5d9cf58d8239c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:51.589324  357835 cache.go:107] acquiring lock: {Name:mk1e16df1578e3f66034d7e28be03b6ac01b470a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:51.589355  357835 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1017 20:09:51.589367  357835 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 130.209µs
	I1017 20:09:51.589335  357835 cache.go:107] acquiring lock: {Name:mkb1ea73854f03abddddc66ea6d8ff48980053b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:51.589356  357835 cache.go:107] acquiring lock: {Name:mk47a558c7bfc49677b52c17a6cb39d0217750ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:51.589382  357835 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1017 20:09:51.589336  357835 cache.go:107] acquiring lock: {Name:mk79978b0094a0a4fe274208f9bd0f469915fa13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:51.589435  357835 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1017 20:09:51.589391  357835 cache.go:107] acquiring lock: {Name:mk95a64393bf585bd3acb10c28b2e4486b82554a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:51.589474  357835 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1017 20:09:51.589514  357835 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1017 20:09:51.589542  357835 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 20:09:51.589579  357835 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1017 20:09:51.589594  357835 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1017 20:09:51.589830  357835 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1017 20:09:51.591387  357835 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1017 20:09:51.591442  357835 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1017 20:09:51.591482  357835 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 20:09:51.591501  357835 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1017 20:09:51.591391  357835 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1017 20:09:51.591935  357835 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1017 20:09:51.592381  357835 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1017 20:09:51.628324  357835 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:09:51.628349  357835 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:09:51.628365  357835 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:09:51.628397  357835 start.go:360] acquireMachinesLock for no-preload-449580: {Name:mk19bcf32a0d1bfb1bd4e113ba01604af981e85e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:51.628502  357835 start.go:364] duration metric: took 84.392µs to acquireMachinesLock for "no-preload-449580"
	I1017 20:09:51.628535  357835 start.go:93] Provisioning new machine with config: &{Name:no-preload-449580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:09:51.628614  357835 start.go:125] createHost starting for "" (driver="docker")
	I1017 20:09:51.725073  353504 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1017 20:09:51.725152  353504 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:09:51.725267  353504 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:09:51.725354  353504 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 20:09:51.725402  353504 kubeadm.go:318] OS: Linux
	I1017 20:09:51.725460  353504 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:09:51.725522  353504 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:09:51.725585  353504 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:09:51.725664  353504 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:09:51.725727  353504 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:09:51.725805  353504 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:09:51.725871  353504 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:09:51.726037  353504 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 20:09:51.726131  353504 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:09:51.726261  353504 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:09:51.726592  353504 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:09:51.726680  353504 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 20:09:51.729247  353504 out.go:252]   - Generating certificates and keys ...
	I1017 20:09:51.729368  353504 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:09:51.729457  353504 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:09:51.729549  353504 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:09:51.729625  353504 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:09:51.729700  353504 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:09:51.729965  353504 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:09:51.730050  353504 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:09:51.730220  353504 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-726816] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1017 20:09:51.730288  353504 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:09:51.730450  353504 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-726816] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1017 20:09:51.730524  353504 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:09:51.730604  353504 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:09:51.730657  353504 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:09:51.730729  353504 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:09:51.730828  353504 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:09:51.730895  353504 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:09:51.730969  353504 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:09:51.731052  353504 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:09:51.731149  353504 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:09:51.731223  353504 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 20:09:51.732825  353504 out.go:252]   - Booting up control plane ...
	I1017 20:09:51.733005  353504 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:09:51.733151  353504 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:09:51.733257  353504 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:09:51.733420  353504 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:09:51.733550  353504 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:09:51.733615  353504 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:09:51.733864  353504 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1017 20:09:51.734015  353504 kubeadm.go:318] [apiclient] All control plane components are healthy after 4.502769 seconds
	I1017 20:09:51.734127  353504 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:09:51.734248  353504 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:09:51.734319  353504 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:09:51.734598  353504 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-726816 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:09:51.734672  353504 kubeadm.go:318] [bootstrap-token] Using token: b4zc16.m073iq0a7t38zvah
	I1017 20:09:51.736433  353504 out.go:252]   - Configuring RBAC rules ...
	I1017 20:09:51.736529  353504 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:09:51.736597  353504 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:09:51.736714  353504 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:09:51.736885  353504 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:09:51.736981  353504 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:09:51.737075  353504 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:09:51.737200  353504 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:09:51.737267  353504 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:09:51.737345  353504 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:09:51.737355  353504 kubeadm.go:318] 
	I1017 20:09:51.737439  353504 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:09:51.737465  353504 kubeadm.go:318] 
	I1017 20:09:51.737584  353504 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:09:51.737597  353504 kubeadm.go:318] 
	I1017 20:09:51.737633  353504 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:09:51.737714  353504 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:09:51.737793  353504 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:09:51.737803  353504 kubeadm.go:318] 
	I1017 20:09:51.737876  353504 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:09:51.737885  353504 kubeadm.go:318] 
	I1017 20:09:51.737950  353504 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:09:51.737960  353504 kubeadm.go:318] 
	I1017 20:09:51.738101  353504 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:09:51.738221  353504 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:09:51.738335  353504 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:09:51.738347  353504 kubeadm.go:318] 
	I1017 20:09:51.738413  353504 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:09:51.738475  353504 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:09:51.738480  353504 kubeadm.go:318] 
	I1017 20:09:51.738625  353504 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token b4zc16.m073iq0a7t38zvah \
	I1017 20:09:51.738807  353504 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 \
	I1017 20:09:51.738830  353504 kubeadm.go:318] 	--control-plane 
	I1017 20:09:51.738834  353504 kubeadm.go:318] 
	I1017 20:09:51.738953  353504 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:09:51.738967  353504 kubeadm.go:318] 
	I1017 20:09:51.739045  353504 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token b4zc16.m073iq0a7t38zvah \
	I1017 20:09:51.739151  353504 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 
	I1017 20:09:51.739163  353504 cni.go:84] Creating CNI manager for ""
	I1017 20:09:51.739169  353504 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:09:51.740839  353504 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 20:09:51.105933  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1017 20:09:51.105995  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:51.268696  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:41806->192.168.76.2:8443: read: connection reset by peer
	I1017 20:09:51.598837  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:51.599206  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:52.098919  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:52.099474  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:52.598895  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:52.599346  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
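
The interleaved 344862 lines above show minikube polling the apiserver's /healthz endpoint at roughly 500ms intervals while the endpoint moves through timeout, connection reset, and connection refused. A minimal Go sketch of such a poll loop (an illustration under assumed values, not minikube's actual api_server.go) could look like:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver presents a self-signed certificate, so this
	// probe skips verification, as health checks of this kind typically do.
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for apiserver /healthz")
}
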
	I1017 20:09:51.742362  353504 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:09:51.746863  353504 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1017 20:09:51.746888  353504 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:09:51.761048  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:09:52.552952  353504 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:09:52.553020  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:52.553020  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-726816 minikube.k8s.io/updated_at=2025_10_17T20_09_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=old-k8s-version-726816 minikube.k8s.io/primary=true
	I1017 20:09:52.565214  353504 ops.go:34] apiserver oom_adj: -16
	I1017 20:09:52.638090  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:53.138564  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:53.639134  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:54.139036  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:54.638975  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:55.138800  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:55.638570  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:51.631862  357835 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:09:51.633820  357835 start.go:159] libmachine.API.Create for "no-preload-449580" (driver="docker")
	I1017 20:09:51.633911  357835 client.go:168] LocalClient.Create starting
	I1017 20:09:51.634034  357835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem
	I1017 20:09:51.634126  357835 main.go:141] libmachine: Decoding PEM data...
	I1017 20:09:51.634150  357835 main.go:141] libmachine: Parsing certificate...
	I1017 20:09:51.634242  357835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem
	I1017 20:09:51.634281  357835 main.go:141] libmachine: Decoding PEM data...
	I1017 20:09:51.634295  357835 main.go:141] libmachine: Parsing certificate...
	I1017 20:09:51.634956  357835 cli_runner.go:164] Run: docker network inspect no-preload-449580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:09:51.655611  357835 cli_runner.go:211] docker network inspect no-preload-449580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:09:51.655677  357835 network_create.go:284] running [docker network inspect no-preload-449580] to gather additional debugging logs...
	I1017 20:09:51.655749  357835 cli_runner.go:164] Run: docker network inspect no-preload-449580
	W1017 20:09:51.673818  357835 cli_runner.go:211] docker network inspect no-preload-449580 returned with exit code 1
	I1017 20:09:51.673866  357835 network_create.go:287] error running [docker network inspect no-preload-449580]: docker network inspect no-preload-449580: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-449580 not found
	I1017 20:09:51.673882  357835 network_create.go:289] output of [docker network inspect no-preload-449580]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-449580 not found
	
	** /stderr **
	I1017 20:09:51.673982  357835 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:09:51.694246  357835 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d34a70da1174 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:b8:c9:c3:2e:b0} reservation:<nil>}
	I1017 20:09:51.694671  357835 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-07edace58173 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f3:28:2c:52:ce} reservation:<nil>}
	I1017 20:09:51.695088  357835 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a478249e8fe7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:51:65:8d:cb:60} reservation:<nil>}
	I1017 20:09:51.695470  357835 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7ed8ef1bc0a4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:6a:98:d7:e8:28} reservation:<nil>}
	I1017 20:09:51.695941  357835 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-23fdbb5d6173 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0a:e3:cd:1a:06:d9} reservation:<nil>}
	I1017 20:09:51.696450  357835 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-a2f3c9774d26 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:a6:40:e1:d5:0a:cd} reservation:<nil>}
	I1017 20:09:51.697321  357835 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000342810}
	I1017 20:09:51.697354  357835 network_create.go:124] attempt to create docker network no-preload-449580 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1017 20:09:51.697414  357835 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-449580 no-preload-449580
	I1017 20:09:51.743029  357835 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1017 20:09:51.761731  357835 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1017 20:09:51.767175  357835 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1017 20:09:51.769521  357835 network_create.go:108] docker network no-preload-449580 192.168.103.0/24 created
	I1017 20:09:51.769548  357835 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-449580" container
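
The network.go lines above step through the private 192.168.x.0/24 candidates in increments of 9 (49, 58, 67, 76, 85, 94) and settle on the first free one, 192.168.103.0/24. A rough Go sketch of that probe, simplified to checking local interface addresses (which is essentially what the skipped-subnet entries report), not minikube's exact implementation:

package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any local interface address falls inside cidr.
func subnetTaken(cidr string) bool {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true
	}
	addrs, _ := net.InterfaceAddrs()
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	// The probe in the log starts at 192.168.49.0/24 and advances by 9.
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if subnetTaken(cidr) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
}
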
	I1017 20:09:51.769603  357835 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:09:51.781731  357835 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1017 20:09:51.789135  357835 cli_runner.go:164] Run: docker volume create no-preload-449580 --label name.minikube.sigs.k8s.io=no-preload-449580 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:09:51.791341  357835 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1017 20:09:51.792228  357835 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1017 20:09:51.792662  357835 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1017 20:09:51.812283  357835 oci.go:103] Successfully created a docker volume no-preload-449580
	I1017 20:09:51.812374  357835 cli_runner.go:164] Run: docker run --rm --name no-preload-449580-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-449580 --entrypoint /usr/bin/test -v no-preload-449580:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:09:51.863947  357835 cache.go:157] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1017 20:09:51.863982  357835 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 274.644774ms
	I1017 20:09:51.864012  357835 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1017 20:09:52.283459  357835 cache.go:157] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1017 20:09:52.283494  357835 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 694.253979ms
	I1017 20:09:52.283512  357835 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1017 20:09:52.288203  357835 oci.go:107] Successfully prepared a docker volume no-preload-449580
	I1017 20:09:52.288229  357835 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1017 20:09:52.288324  357835 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 20:09:52.288369  357835 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 20:09:52.288414  357835 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:09:52.359098  357835 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-449580 --name no-preload-449580 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-449580 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-449580 --network no-preload-449580 --ip 192.168.103.2 --volume no-preload-449580:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:09:52.696447  357835 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Running}}
	I1017 20:09:52.721226  357835 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:09:52.746434  357835 cli_runner.go:164] Run: docker exec no-preload-449580 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:09:52.802853  357835 oci.go:144] the created container "no-preload-449580" has a running status.
	I1017 20:09:52.802893  357835 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa...
	I1017 20:09:53.258170  357835 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:09:53.276004  357835 cache.go:157] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1017 20:09:53.276051  357835 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.686708648s
	I1017 20:09:53.276369  357835 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1017 20:09:53.297712  357835 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:09:53.323422  357835 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:09:53.323439  357835 kic_runner.go:114] Args: [docker exec --privileged no-preload-449580 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:09:53.339315  357835 cache.go:157] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1017 20:09:53.339685  357835 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.750027898s
	I1017 20:09:53.339716  357835 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1017 20:09:53.366278  357835 cache.go:157] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1017 20:09:53.366311  357835 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.777016868s
	I1017 20:09:53.366329  357835 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1017 20:09:53.386614  357835 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:09:53.393260  357835 cache.go:157] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1017 20:09:53.393323  357835 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.804082095s
	I1017 20:09:53.393362  357835 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1017 20:09:53.410345  357835 machine.go:93] provisionDockerMachine start ...
	I1017 20:09:53.410463  357835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:09:53.433476  357835 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:53.433771  357835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 20:09:53.433788  357835 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:09:53.578201  357835 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-449580
	
	I1017 20:09:53.578234  357835 ubuntu.go:182] provisioning hostname "no-preload-449580"
	I1017 20:09:53.578300  357835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:09:53.599449  357835 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:53.599655  357835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 20:09:53.599668  357835 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-449580 && echo "no-preload-449580" | sudo tee /etc/hostname
	I1017 20:09:53.708063  357835 cache.go:157] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1017 20:09:53.708098  357835 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.118776223s
	I1017 20:09:53.708115  357835 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1017 20:09:53.708150  357835 cache.go:87] Successfully saved all images to host disk.
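
Because this profile was started with --preload=false, each Kubernetes image above is pulled from the registry (the daemon lookups earlier failed) and written to a per-image tarball under .minikube/cache/images. A minimal sketch of that pull-then-save shape using the go-containerregistry crane package (an assumed stand-in chosen for illustration, not necessarily the code path minikube uses):

package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	ref := "registry.k8s.io/pause:3.10.1"
	// Pull the image from the remote registry, the fallback once
	// the local daemon lookup comes back "No such image".
	img, err := crane.Pull(ref)
	if err != nil {
		log.Fatal(err)
	}
	// Save it as a docker-style tarball, analogous to the
	// .minikube/cache/images/... files written in the log.
	if err := crane.Save(img, ref, "pause_3.10.1.tar"); err != nil {
		log.Fatal(err)
	}
	log.Println("cached", ref)
}
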
	I1017 20:09:53.753531  357835 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-449580
	
	I1017 20:09:53.753616  357835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:09:53.772147  357835 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:53.772359  357835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 20:09:53.772380  357835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-449580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-449580/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-449580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:09:53.907720  357835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
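
Note: the script above applies the Debian convention of mapping the machine's own hostname to 127.0.1.1 (127.0.0.1 stays reserved for localhost), either rewriting an existing 127.0.1.1 entry or appending one. Verifying on the node (expected line reconstructed from the script):

    $ grep '^127.0.1.1' /etc/hosts
    127.0.1.1 no-preload-449580
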
	I1017 20:09:53.907779  357835 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:09:53.907834  357835 ubuntu.go:190] setting up certificates
	I1017 20:09:53.907852  357835 provision.go:84] configureAuth start
	I1017 20:09:53.907917  357835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-449580
	I1017 20:09:53.925949  357835 provision.go:143] copyHostCerts
	I1017 20:09:53.926026  357835 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:09:53.926043  357835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:09:53.926129  357835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:09:53.926242  357835 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:09:53.926254  357835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:09:53.926297  357835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:09:53.926380  357835 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:09:53.926391  357835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:09:53.926431  357835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:09:53.926520  357835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.no-preload-449580 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-449580]
	I1017 20:09:54.254944  357835 provision.go:177] copyRemoteCerts
	I1017 20:09:54.255017  357835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:09:54.255053  357835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:09:54.273190  357835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:09:54.372869  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:09:54.395175  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:09:54.414409  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:09:54.433648  357835 provision.go:87] duration metric: took 525.777157ms to configureAuth
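
Note: configureAuth issues a server certificate whose SANs (the san=[...] list above) cover every name and address the machine is reachable by, then pushes it to /etc/docker on the node. A hedged way to inspect the pushed cert (output illustrative; entry order may differ):

    $ openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
        X509v3 Subject Alternative Name:
            DNS:localhost, DNS:minikube, DNS:no-preload-449580, IP Address:127.0.0.1, IP Address:192.168.103.2
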
	I1017 20:09:54.433681  357835 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:09:54.433904  357835 config.go:182] Loaded profile config "no-preload-449580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:54.434025  357835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:09:54.451765  357835 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:54.451988  357835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 20:09:54.452005  357835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:09:54.709165  357835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:09:54.709199  357835 machine.go:96] duration metric: took 1.298819268s to provisionDockerMachine
	I1017 20:09:54.709213  357835 client.go:171] duration metric: took 3.075293263s to LocalClient.Create
	I1017 20:09:54.709248  357835 start.go:167] duration metric: took 3.075433501s to libmachine.API.Create "no-preload-449580"
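
Note: the sysconfig write above is how minikube hands runtime flags to cri-o in the kicbase image; the crio unit presumably sources /etc/sysconfig/crio.minikube via an EnvironmentFile= directive, and the restart makes --insecure-registry for the service CIDR take effect. A sanity check on the node:

    $ cat /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    $ systemctl is-active crio
    active
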
	I1017 20:09:54.709263  357835 start.go:293] postStartSetup for "no-preload-449580" (driver="docker")
	I1017 20:09:54.709277  357835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:09:54.709355  357835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:09:54.709408  357835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:09:54.729814  357835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:09:54.830984  357835 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:09:54.834864  357835 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:09:54.834900  357835 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:09:54.834915  357835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:09:54.834975  357835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:09:54.835048  357835 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:09:54.835146  357835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:09:54.843813  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:09:54.866405  357835 start.go:296] duration metric: took 157.125649ms for postStartSetup
	I1017 20:09:54.866926  357835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-449580
	I1017 20:09:54.885301  357835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/config.json ...
	I1017 20:09:54.885595  357835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:09:54.885639  357835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:09:54.904613  357835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:09:55.001581  357835 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:09:55.006891  357835 start.go:128] duration metric: took 3.378258088s to createHost
	I1017 20:09:55.006920  357835 start.go:83] releasing machines lock for "no-preload-449580", held for 3.378402114s
	I1017 20:09:55.006996  357835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-449580
	I1017 20:09:55.026635  357835 ssh_runner.go:195] Run: cat /version.json
	I1017 20:09:55.026674  357835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:09:55.026689  357835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:09:55.026764  357835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:09:55.045375  357835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:09:55.046951  357835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:09:55.141446  357835 ssh_runner.go:195] Run: systemctl --version
	I1017 20:09:55.199379  357835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:09:55.235914  357835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:09:55.240893  357835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:09:55.240959  357835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:09:55.270224  357835 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 20:09:55.270257  357835 start.go:495] detecting cgroup driver to use...
	I1017 20:09:55.270295  357835 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:09:55.270353  357835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:09:55.287685  357835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:09:55.300868  357835 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:09:55.300936  357835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:09:55.318969  357835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:09:55.338438  357835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:09:55.427606  357835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:09:55.519077  357835 docker.go:234] disabling docker service ...
	I1017 20:09:55.519156  357835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:09:55.539448  357835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:09:55.553276  357835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:09:55.644482  357835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:09:55.737604  357835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:09:55.751633  357835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:09:55.767726  357835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:09:55.767805  357835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:55.780232  357835 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:09:55.780303  357835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:55.790204  357835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:55.800346  357835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:55.810338  357835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:09:55.819861  357835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:55.829753  357835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:55.844844  357835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:55.854699  357835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:09:55.863125  357835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
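
Note: after the sed edits above, the relevant keys of /etc/crio/crio.conf.d/02-crio.conf should read as follows (reconstructed from the commands, not dumped by this run):

    $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
      "net.ipv4.ip_unprivileged_port_start=0",
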
	I1017 20:09:55.871366  357835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:09:55.953429  357835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:09:56.075708  357835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:09:56.075830  357835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:09:56.080308  357835 start.go:563] Will wait 60s for crictl version
	I1017 20:09:56.080371  357835 ssh_runner.go:195] Run: which crictl
	I1017 20:09:56.084567  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:09:56.109951  357835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:09:56.110043  357835 ssh_runner.go:195] Run: crio --version
	I1017 20:09:56.144526  357835 ssh_runner.go:195] Run: crio --version
	I1017 20:09:56.180909  357835 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:09:56.182796  357835 cli_runner.go:164] Run: docker network inspect no-preload-449580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:09:56.204346  357835 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1017 20:09:56.208645  357835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
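
Note: the guarded rewrite above leaves exactly one host.minikube.internal record pointing at the network gateway, giving workloads on the node a stable name for the host:

    $ grep 'host.minikube.internal' /etc/hosts
    192.168.103.1	host.minikube.internal
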
	I1017 20:09:56.221133  357835 kubeadm.go:883] updating cluster {Name:no-preload-449580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:09:56.221249  357835 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:56.221297  357835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:09:56.249097  357835 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1017 20:09:56.249122  357835 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1017 20:09:56.249204  357835 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1017 20:09:56.249223  357835 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1017 20:09:56.249204  357835 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:09:56.249262  357835 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1017 20:09:56.249273  357835 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1017 20:09:56.249308  357835 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1017 20:09:56.249314  357835 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 20:09:56.249277  357835 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1017 20:09:56.250582  357835 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 20:09:56.250596  357835 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1017 20:09:56.250585  357835 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1017 20:09:56.250585  357835 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1017 20:09:56.250585  357835 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1017 20:09:56.250586  357835 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1017 20:09:56.250585  357835 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1017 20:09:56.250590  357835 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:09:56.367324  357835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1017 20:09:56.372555  357835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1017 20:09:56.375774  357835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1017 20:09:53.098322  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:53.098835  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:53.598342  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:53.598808  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:54.098436  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:54.098854  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:54.598304  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:54.598681  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:55.098903  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:55.099317  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:55.598825  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:55.599369  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:56.098965  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:56.099431  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:56.598912  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:56.599303  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:57.098906  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:57.099361  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:57.598897  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:57.599362  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
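
Note: process 344862 is restarting a separate cluster and polls /healthz every 500ms (see the .098/.598 timestamp spacing) until kube-apiserver binds 192.168.76.2:8443; every attempt above fails at TCP connect, so the apiserver is still coming up. Once it is ready, the same unauthenticated probe returns ok:

    $ curl -sk https://192.168.76.2:8443/healthz
    ok
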
	I1017 20:09:56.138993  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:56.638288  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:57.138943  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:57.638427  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:58.138232  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:58.638939  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:59.138976  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:59.638961  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:00.138734  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:00.638980  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:56.388631  357835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 20:09:56.389076  357835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1017 20:09:56.394705  357835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1017 20:09:56.406555  357835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1017 20:09:56.410514  357835 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1017 20:09:56.410571  357835 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1017 20:09:56.410638  357835 ssh_runner.go:195] Run: which crictl
	I1017 20:09:56.414994  357835 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1017 20:09:56.415060  357835 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1017 20:09:56.415108  357835 ssh_runner.go:195] Run: which crictl
	I1017 20:09:56.425476  357835 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1017 20:09:56.425531  357835 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1017 20:09:56.425580  357835 ssh_runner.go:195] Run: which crictl
	I1017 20:09:56.437969  357835 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1017 20:09:56.438020  357835 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 20:09:56.438083  357835 ssh_runner.go:195] Run: which crictl
	I1017 20:09:56.440438  357835 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1017 20:09:56.440488  357835 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1017 20:09:56.440535  357835 ssh_runner.go:195] Run: which crictl
	I1017 20:09:56.440838  357835 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1017 20:09:56.440875  357835 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1017 20:09:56.440916  357835 ssh_runner.go:195] Run: which crictl
	I1017 20:09:56.453348  357835 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1017 20:09:56.453395  357835 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1017 20:09:56.453446  357835 ssh_runner.go:195] Run: which crictl
	I1017 20:09:56.453464  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1017 20:09:56.453510  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1017 20:09:56.453522  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1017 20:09:56.453560  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 20:09:56.453611  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1017 20:09:56.453668  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1017 20:09:56.458931  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1017 20:09:56.493326  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1017 20:09:56.493373  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1017 20:09:56.493404  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1017 20:09:56.496559  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1017 20:09:56.496605  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1017 20:09:56.496695  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 20:09:56.496725  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1017 20:09:56.530602  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1017 20:09:56.532617  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1017 20:09:56.532617  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1017 20:09:56.533281  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1017 20:09:56.533409  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1017 20:09:56.538391  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 20:09:56.538423  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1017 20:09:56.570181  357835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1017 20:09:56.570301  357835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1017 20:09:56.574970  357835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1017 20:09:56.575015  357835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1017 20:09:56.575040  357835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1017 20:09:56.575086  357835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1017 20:09:56.575100  357835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1017 20:09:56.575104  357835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1017 20:09:56.575160  357835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1017 20:09:56.575172  357835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1017 20:09:56.578877  357835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1017 20:09:56.578942  357835 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1017 20:09:56.578972  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1017 20:09:56.578980  357835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1017 20:09:56.578882  357835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1017 20:09:56.579068  357835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1017 20:09:56.584621  357835 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1017 20:09:56.584663  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1017 20:09:56.584678  357835 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1017 20:09:56.584703  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1017 20:09:56.584735  357835 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1017 20:09:56.584782  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1017 20:09:56.585154  357835 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1017 20:09:56.585183  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1017 20:09:56.592815  357835 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1017 20:09:56.592866  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1017 20:09:56.598090  357835 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1017 20:09:56.598139  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
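
Note: every image above follows the same stat-then-scp pattern: a size/mtime probe on the node decides whether the cached tarball must be transferred. Schematically (the node: target is illustrative; the run uses its own SSH transport):

    $ stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0 \
        || scp ~/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 node:/var/lib/minikube/images/
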
	I1017 20:09:56.699028  357835 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1017 20:09:56.699112  357835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1017 20:09:57.238267  357835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1017 20:09:57.238330  357835 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1017 20:09:57.238375  357835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1017 20:09:57.966749  357835 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:09:58.368534  357835 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.130132821s)
	I1017 20:09:58.368561  357835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1017 20:09:58.368596  357835 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1017 20:09:58.368640  357835 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1017 20:09:58.368663  357835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1017 20:09:58.368685  357835 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:09:58.368725  357835 ssh_runner.go:195] Run: which crictl
	I1017 20:09:59.560559  357835 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.191866555s)
	I1017 20:09:59.560577  357835 ssh_runner.go:235] Completed: which crictl: (1.19183399s)
	I1017 20:09:59.560593  357835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1017 20:09:59.560623  357835 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1017 20:09:59.560637  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:09:59.560667  357835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1017 20:10:00.902615  357835 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.341945195s)
	I1017 20:10:00.902655  357835 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.341960523s)
	I1017 20:10:00.902682  357835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1017 20:10:00.902708  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:10:00.902709  357835 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1017 20:10:00.902862  357835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1017 20:10:00.928942  357835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:09:58.098922  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:58.099347  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:58.598889  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:58.599334  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:59.098912  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:59.099358  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:09:59.598912  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:09:59.599397  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:00.098905  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:00.099401  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:00.598917  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:00.599423  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:01.098923  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:01.099333  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:01.598915  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:01.599418  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:02.098904  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:02.099388  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:02.598918  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:02.599363  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:01.139177  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:01.638986  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:02.139019  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:02.638880  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:03.138226  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:03.638910  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:04.138919  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:04.638880  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:05.138603  353504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:05.228700  353504 kubeadm.go:1113] duration metric: took 12.675735693s to wait for elevateKubeSystemPrivileges
	I1017 20:10:05.228766  353504 kubeadm.go:402] duration metric: took 23.854668859s to StartCluster
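
Note: the repeated `kubectl get sa default` calls above are the elevateKubeSystemPrivileges wait: kubeadm returns before the controller-manager has created the default ServiceAccount, so minikube retries every 500ms until the command succeeds (output illustrative):

    $ sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
    NAME      SECRETS   AGE
    default   0         1s
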
	I1017 20:10:05.228790  353504 settings.go:142] acquiring lock: {Name:mka4633fb25e97d0a4c6d64012444d90b7517c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:05.228868  353504 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:10:05.230519  353504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:05.230775  353504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:10:05.230792  353504 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:10:05.230922  353504 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:10:05.230998  353504 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-726816"
	I1017 20:10:05.231034  353504 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-726816"
	I1017 20:10:05.231068  353504 host.go:66] Checking if "old-k8s-version-726816" exists ...
	I1017 20:10:05.231072  353504 config.go:182] Loaded profile config "old-k8s-version-726816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:10:05.231112  353504 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-726816"
	I1017 20:10:05.231127  353504 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-726816"
	I1017 20:10:05.231641  353504 cli_runner.go:164] Run: docker container inspect old-k8s-version-726816 --format={{.State.Status}}
	I1017 20:10:05.232110  353504 cli_runner.go:164] Run: docker container inspect old-k8s-version-726816 --format={{.State.Status}}
	I1017 20:10:05.233447  353504 out.go:179] * Verifying Kubernetes components...
	I1017 20:10:05.234967  353504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:10:05.260235  353504 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:10:05.261346  353504 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-726816"
	I1017 20:10:05.261406  353504 host.go:66] Checking if "old-k8s-version-726816" exists ...
	I1017 20:10:05.261943  353504 cli_runner.go:164] Run: docker container inspect old-k8s-version-726816 --format={{.State.Status}}
	I1017 20:10:05.262554  353504 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:10:05.262577  353504 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:10:05.262632  353504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-726816
	I1017 20:10:05.298504  353504 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:10:05.298543  353504 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:10:05.298607  353504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-726816
	I1017 20:10:05.301042  353504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/old-k8s-version-726816/id_rsa Username:docker}
	I1017 20:10:05.328642  353504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/old-k8s-version-726816/id_rsa Username:docker}
	I1017 20:10:05.370722  353504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:10:05.417293  353504 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:10:05.433415  353504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:10:05.466527  353504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:10:05.638271  353504 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1017 20:10:05.639564  353504 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-726816" to be "Ready" ...
	I1017 20:10:05.881016  353504 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 20:10:05.882672  353504 addons.go:514] duration metric: took 651.73946ms for enable addons: enabled=[storage-provisioner default-storageclass]
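
Note: only storage-provisioner and default-storageclass were requested (every other key in the toEnable map above is false). The resulting addon state can be confirmed from the host with the same binary and profile flag used elsewhere in this report (a sketch):

    $ out/minikube-linux-amd64 -p old-k8s-version-726816 addons list
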
	I1017 20:10:02.061923  357835 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.159034363s)
	I1017 20:10:02.061950  357835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1017 20:10:02.061991  357835 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1017 20:10:02.062089  357835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1017 20:10:02.061996  357835 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.133018784s)
	I1017 20:10:02.062181  357835 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1017 20:10:02.062271  357835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1017 20:10:03.479375  357835 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.417078726s)
	I1017 20:10:03.479417  357835 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1017 20:10:03.479377  357835 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.417255685s)
	I1017 20:10:03.479438  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1017 20:10:03.479454  357835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1017 20:10:03.479481  357835 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1017 20:10:03.479554  357835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1017 20:10:03.098929  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:03.099395  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:03.598978  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:03.599431  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:04.099138  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:04.099623  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:04.599117  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:04.599653  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:05.098923  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:05.099369  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:05.598810  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:05.599245  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:06.098954  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:06.099449  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:06.598700  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:06.599181  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:07.098913  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:07.099405  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:07.598901  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:07.599394  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
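The block above is minikube's apiserver readiness poll: roughly every 500ms it issues a GET against /healthz and records the connection-refused failures until the endpoint answers. A minimal shell equivalent of that loop, a sketch only and not minikube's implementation, assuming curl is available on the host (-k because the endpoint serves the cluster CA's certificate):

	# Poll the apiserver healthz endpoint about every 500ms until it responds.
	until curl -fsk --max-time 2 https://192.168.76.2:8443/healthz >/dev/null; do
	    sleep 0.5
	done
	echo "healthz ok"
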
	I1017 20:10:06.145787  353504 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-726816" context rescaled to 1 replicas
	W1017 20:10:07.642835  353504 node_ready.go:57] node "old-k8s-version-726816" has "Ready":"False" status (will retry)
	W1017 20:10:09.644463  353504 node_ready.go:57] node "old-k8s-version-726816" has "Ready":"False" status (will retry)
	I1017 20:10:07.110258  357835 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.630677088s)
	I1017 20:10:07.110298  357835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1017 20:10:07.110331  357835 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1017 20:10:07.110377  357835 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1017 20:10:07.859627  357835 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1017 20:10:07.859670  357835 cache_images.go:124] Successfully loaded all cached images
	I1017 20:10:07.859677  357835 cache_images.go:93] duration metric: took 11.610539422s to LoadCachedImages
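Each image in the sequence above follows the same three-step pattern: stat the tarball on the node, scp it from the local cache when the existence check fails, then load it into CRI-O's store with podman. A condensed sketch of that pattern, assuming an ssh-reachable host aliased "node" (the paths mirror the storage-provisioner entries in the log):

	# Sketch of the per-image transfer-and-load pattern; not minikube's code.
	img=/var/lib/minikube/images/storage-provisioner_v5
	src=~/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	ssh node "stat -c '%s %y' $img" >/dev/null 2>&1 || scp "$src" node:"$img"
	ssh node "sudo podman load -i $img"
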
	I1017 20:10:07.859694  357835 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1017 20:10:07.859855  357835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-449580 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:10:07.859960  357835 ssh_runner.go:195] Run: crio config
	I1017 20:10:07.926406  357835 cni.go:84] Creating CNI manager for ""
	I1017 20:10:07.926434  357835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:10:07.926456  357835 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:10:07.926485  357835 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-449580 NodeName:no-preload-449580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:10:07.926679  357835 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-449580"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
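
The YAML above is the config minikube writes to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and later feeds to kubeadm init. A generated config like this can be sanity-checked without touching host state via kubeadm's dry-run mode; a sketch, assuming the v1.34.1 binary staged under /var/lib/minikube/binaries as in the log:

	# Dry-run the generated config; makes no changes to the host. Sketch only.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run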
	
	I1017 20:10:07.926770  357835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:10:07.938663  357835 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1017 20:10:07.938732  357835 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1017 20:10:07.951121  357835 binary.go:77] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1017 20:10:07.951213  357835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1017 20:10:07.951268  357835 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/bin/linux/amd64/v1.34.1/kubelet
	I1017 20:10:07.951390  357835 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/bin/linux/amd64/v1.34.1/kubeadm
	I1017 20:10:07.957003  357835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1017 20:10:07.957055  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/cache/bin/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1017 20:10:08.689776  357835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:10:08.705291  357835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1017 20:10:08.710007  357835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1017 20:10:08.710046  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/cache/bin/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1017 20:10:08.771091  357835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1017 20:10:08.778067  357835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1017 20:10:08.778105  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/cache/bin/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1017 20:10:09.082640  357835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:10:09.091863  357835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 20:10:09.107051  357835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:10:09.198228  357835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1017 20:10:09.213989  357835 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:10:09.219556  357835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
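The bash one-liner above is a safe /etc/hosts update: it filters out any stale control-plane.minikube.internal entry, appends the fresh mapping, writes the result to a PID-keyed temp file, and copies it back in a single sudo step. The node should then resolve the control-plane endpoint locally; a quick check:

	# Expected entry after the rewrite above (tab-separated).
	grep control-plane.minikube.internal /etc/hosts
	# 192.168.103.2	control-plane.minikube.internal
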
	I1017 20:10:09.264151  357835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:10:09.359472  357835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:10:09.384019  357835 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580 for IP: 192.168.103.2
	I1017 20:10:09.384047  357835 certs.go:195] generating shared ca certs ...
	I1017 20:10:09.384070  357835 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:09.384254  357835 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:10:09.384317  357835 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:10:09.384332  357835 certs.go:257] generating profile certs ...
	I1017 20:10:09.384413  357835 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.key
	I1017 20:10:09.384432  357835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt with IP's: []
	I1017 20:10:09.566911  357835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt ...
	I1017 20:10:09.566945  357835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt: {Name:mk7f9b50e525ee5464fadb94f53f1ba1441d9c9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:09.567172  357835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.key ...
	I1017 20:10:09.567198  357835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.key: {Name:mk3d2bbd7235ccd62d2fd524abc8b89d6e2445b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:09.567349  357835 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.key.15dab988
	I1017 20:10:09.567372  357835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.crt.15dab988 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1017 20:10:10.059456  357835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.crt.15dab988 ...
	I1017 20:10:10.059489  357835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.crt.15dab988: {Name:mkcfb1ca0d01937ad678096a772ca15abb87c7c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:10.059673  357835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.key.15dab988 ...
	I1017 20:10:10.059690  357835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.key.15dab988: {Name:mkcf4c9a55680b81da86ed706b26db8c22bb3850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:10.059794  357835 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.crt.15dab988 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.crt
	I1017 20:10:10.059895  357835 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.key.15dab988 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.key
	I1017 20:10:10.059962  357835 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.key
	I1017 20:10:10.059978  357835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.crt with IP's: []
	I1017 20:10:10.337380  357835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.crt ...
	I1017 20:10:10.337421  357835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.crt: {Name:mk20a75d684316e64ccda82654254c5913d72a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:10.337643  357835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.key ...
	I1017 20:10:10.337663  357835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.key: {Name:mk5525edc666400cffd308fdedb7ab796a28480b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
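The crypto.go/lock.go lines above generate the profile's client, apiserver, and aggregator proxy-client key pairs, each signed by the shared minikubeCA created earlier. minikube does this in Go, but a rough openssl equivalent for the apiserver cert, a sketch with SAN IPs copied from the log and illustrative filenames, looks like:

	# Rough openssl equivalent of the apiserver profile cert; sketch only.
	openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
	    -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	    -days 365 -out apiserver.crt \
	    -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.103.2")
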
	I1017 20:10:10.337941  357835 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:10:10.337993  357835 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:10:10.338006  357835 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:10:10.338032  357835 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:10:10.338059  357835 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:10:10.338090  357835 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:10:10.338133  357835 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:10:10.338713  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:10:10.359174  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:10:10.381596  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:10:10.401620  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:10:10.421857  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 20:10:10.441606  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:10:10.461526  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:10:10.482861  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:10:10.503951  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:10:10.526775  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:10:10.546989  357835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:10:10.567357  357835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:10:10.582109  357835 ssh_runner.go:195] Run: openssl version
	I1017 20:10:10.589264  357835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:10:10.599333  357835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:10:10.603770  357835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:10:10.603848  357835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:10:10.640883  357835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:10:10.652034  357835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:10:10.664005  357835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:10:10.669337  357835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:10:10.669418  357835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:10:10.706938  357835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
	I1017 20:10:10.717384  357835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:10:10.728457  357835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:10:10.735117  357835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:10:10.735197  357835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:10:10.773558  357835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
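The test -L / ln -fs commands above wire the copied PEMs into OpenSSL's hashed-name CA lookup scheme: each symlink in /etc/ssl/certs is named after the certificate's subject hash plus a .0 suffix, which is exactly what the preceding openssl x509 -hash -noout calls compute. Deriving one of those names (such as b5213941.0) by hand:

	# How the hash-named symlinks above are derived from the cert's subject hash.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
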
	I1017 20:10:10.783406  357835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:10:10.787585  357835 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:10:10.787651  357835 kubeadm.go:400] StartCluster: {Name:no-preload-449580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:10:10.787819  357835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:10:10.787877  357835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:10:10.818112  357835 cri.go:89] found id: ""
	I1017 20:10:10.818188  357835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:10:10.827288  357835 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:10:10.836067  357835 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:10:10.836117  357835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:10:10.844931  357835 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:10:10.844957  357835 kubeadm.go:157] found existing configuration files:
	
	I1017 20:10:10.845002  357835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:10:10.853788  357835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:10:10.853848  357835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:10:10.863349  357835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:10:10.872118  357835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:10:10.872192  357835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:10:10.881323  357835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:10:10.890455  357835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:10:10.890532  357835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:10:10.899972  357835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:10:10.910081  357835 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:10:10.910139  357835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
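The four grep/rm pairs above implement stale-kubeconfig cleanup: each file under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443. Here every grep exits with status 2 because the files do not exist yet, so the rm -f calls are no-ops and kubeadm generates everything fresh. Condensed into a loop, as a sketch of the same check-and-remove sequence:

	# Equivalent of the per-file check-and-remove sequence above; sketch only.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	done
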
	I1017 20:10:10.919634  357835 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:10:10.958599  357835 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:10:10.958665  357835 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:10:10.980651  357835 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:10:10.980800  357835 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 20:10:10.980877  357835 kubeadm.go:318] OS: Linux
	I1017 20:10:10.980973  357835 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:10:10.981046  357835 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:10:10.981095  357835 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:10:10.981176  357835 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:10:10.981225  357835 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:10:10.981295  357835 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:10:10.981370  357835 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:10:10.981411  357835 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 20:10:11.040855  357835 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:10:11.041003  357835 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:10:11.041247  357835 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:10:11.054923  357835 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 20:10:11.058986  357835 out.go:252]   - Generating certificates and keys ...
	I1017 20:10:11.059088  357835 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:10:11.059146  357835 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:10:08.098890  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:08.099356  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:08.599081  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:08.599540  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:10:09.098983  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:10:09.099099  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:10:09.128533  344862 cri.go:89] found id: "46e55200e4a629222eac612a0be109923a69f60b679400cf04ebc08c61c55e1e"
	I1017 20:10:09.128560  344862 cri.go:89] found id: ""
	I1017 20:10:09.128581  344862 logs.go:282] 1 containers: [46e55200e4a629222eac612a0be109923a69f60b679400cf04ebc08c61c55e1e]
	I1017 20:10:09.128635  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:10:09.132900  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:10:09.132969  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:10:09.163447  344862 cri.go:89] found id: ""
	I1017 20:10:09.163476  344862 logs.go:282] 0 containers: []
	W1017 20:10:09.163484  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:10:09.163491  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:10:09.163548  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:10:09.191044  344862 cri.go:89] found id: ""
	I1017 20:10:09.191084  344862 logs.go:282] 0 containers: []
	W1017 20:10:09.191096  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:10:09.191104  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:10:09.191161  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:10:09.221314  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:10:09.221335  344862 cri.go:89] found id: ""
	I1017 20:10:09.221343  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:10:09.221406  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:10:09.225547  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:10:09.225612  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:10:09.251719  344862 cri.go:89] found id: ""
	I1017 20:10:09.251790  344862 logs.go:282] 0 containers: []
	W1017 20:10:09.251800  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:10:09.251806  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:10:09.251868  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:10:09.280706  344862 cri.go:89] found id: "6b0fc791d1df8bb62880718597ecd5695e9509884a79d376041c741c5aedb06f"
	I1017 20:10:09.280733  344862 cri.go:89] found id: ""
	I1017 20:10:09.280755  344862 logs.go:282] 1 containers: [6b0fc791d1df8bb62880718597ecd5695e9509884a79d376041c741c5aedb06f]
	I1017 20:10:09.280815  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:10:09.285210  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:10:09.285270  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:10:09.318403  344862 cri.go:89] found id: ""
	I1017 20:10:09.318427  344862 logs.go:282] 0 containers: []
	W1017 20:10:09.318434  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:10:09.318440  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:10:09.318499  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:10:09.348827  344862 cri.go:89] found id: ""
	I1017 20:10:09.348864  344862 logs.go:282] 0 containers: []
	W1017 20:10:09.348875  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:10:09.348888  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:10:09.348912  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:10:09.371007  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:10:09.371055  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:10:09.441948  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:10:09.441971  344862 logs.go:123] Gathering logs for kube-apiserver [46e55200e4a629222eac612a0be109923a69f60b679400cf04ebc08c61c55e1e] ...
	I1017 20:10:09.441987  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46e55200e4a629222eac612a0be109923a69f60b679400cf04ebc08c61c55e1e"
	I1017 20:10:09.477498  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:10:09.477543  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:10:09.523140  344862 logs.go:123] Gathering logs for kube-controller-manager [6b0fc791d1df8bb62880718597ecd5695e9509884a79d376041c741c5aedb06f] ...
	I1017 20:10:09.523183  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b0fc791d1df8bb62880718597ecd5695e9509884a79d376041c741c5aedb06f"
	I1017 20:10:09.551903  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:10:09.551935  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:10:09.591477  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:10:09.591522  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:10:09.625205  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:10:09.625240  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
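With the apiserver still refusing connections, the log-gathering pass above pulls the same diagnostics an operator would collect by hand: per-component container logs via crictl plus the CRI-O and kubelet journals. Reproducible on the node as:

	# Manual version of the diagnostics gathered above (assumes crictl on PATH).
	sudo crictl ps -a --quiet --name=kube-apiserver   # container IDs, if any
	sudo crictl logs --tail 400 <container-id>        # substitute an ID from above
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
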
	I1017 20:10:12.196042  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:10:11.531830  357835 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:10:11.546401  357835 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:10:11.765842  357835 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:10:12.003381  357835 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:10:12.158630  357835 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:10:12.158813  357835 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-449580] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1017 20:10:12.536593  357835 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:10:12.536821  357835 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-449580] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1017 20:10:12.676795  357835 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:10:13.051186  357835 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:10:13.239853  357835 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:10:13.239947  357835 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:10:13.547911  357835 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:10:14.108064  357835 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:10:14.342668  357835 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:10:14.511067  357835 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:10:14.708565  357835 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:10:14.709183  357835 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:10:14.713266  357835 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1017 20:10:12.144821  353504 node_ready.go:57] node "old-k8s-version-726816" has "Ready":"False" status (will retry)
	W1017 20:10:14.144975  353504 node_ready.go:57] node "old-k8s-version-726816" has "Ready":"False" status (will retry)
	I1017 20:10:14.715461  357835 out.go:252]   - Booting up control plane ...
	I1017 20:10:14.715572  357835 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:10:14.715700  357835 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:10:14.715814  357835 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:10:14.730461  357835 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:10:14.730621  357835 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:10:14.739960  357835 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:10:14.740195  357835 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:10:14.740268  357835 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:10:14.844709  357835 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:10:14.844865  357835 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 20:10:15.846307  357835 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001494439s
	I1017 20:10:15.850888  357835 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:10:15.851016  357835 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1017 20:10:15.851184  357835 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:10:15.851284  357835 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
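The control-plane-check phase above probes three local endpoints: the apiserver's /livez on 8443, the controller-manager's /healthz on 10257, and the scheduler's /livez on 10259; the results arrive a few lines later (healthy after roughly 1.1s, 2.1s, and 4.0s respectively). The same probes can be issued by hand on the node:

	# Manual versions of kubeadm's three control-plane probes; sketch only.
	curl -fsk https://192.168.103.2:8443/livez   && echo apiserver ok
	curl -fsk https://127.0.0.1:10257/healthz    && echo controller-manager ok
	curl -fsk https://127.0.0.1:10259/livez      && echo scheduler ok
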
	I1017 20:10:17.196860  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1017 20:10:17.196924  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:10:17.196996  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:10:17.231484  344862 cri.go:89] found id: "924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	I1017 20:10:17.231518  344862 cri.go:89] found id: "46e55200e4a629222eac612a0be109923a69f60b679400cf04ebc08c61c55e1e"
	I1017 20:10:17.231527  344862 cri.go:89] found id: ""
	I1017 20:10:17.231538  344862 logs.go:282] 2 containers: [924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709 46e55200e4a629222eac612a0be109923a69f60b679400cf04ebc08c61c55e1e]
	I1017 20:10:17.231605  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:10:17.236259  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:10:17.241056  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:10:17.241130  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:10:17.276829  344862 cri.go:89] found id: ""
	I1017 20:10:17.276865  344862 logs.go:282] 0 containers: []
	W1017 20:10:17.276878  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:10:17.276887  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:10:17.276976  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:10:17.310905  344862 cri.go:89] found id: ""
	I1017 20:10:17.310931  344862 logs.go:282] 0 containers: []
	W1017 20:10:17.310938  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:10:17.310954  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:10:17.311002  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:10:17.345302  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:10:17.345327  344862 cri.go:89] found id: ""
	I1017 20:10:17.345340  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:10:17.345403  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:10:17.350678  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:10:17.350785  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:10:17.384651  344862 cri.go:89] found id: ""
	I1017 20:10:17.384684  344862 logs.go:282] 0 containers: []
	W1017 20:10:17.384697  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:10:17.384706  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:10:17.384786  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:10:17.420378  344862 cri.go:89] found id: "6b0fc791d1df8bb62880718597ecd5695e9509884a79d376041c741c5aedb06f"
	I1017 20:10:17.420407  344862 cri.go:89] found id: ""
	I1017 20:10:17.420417  344862 logs.go:282] 1 containers: [6b0fc791d1df8bb62880718597ecd5695e9509884a79d376041c741c5aedb06f]
	I1017 20:10:17.420479  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:10:17.426149  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:10:17.426286  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:10:17.457401  344862 cri.go:89] found id: ""
	I1017 20:10:17.457428  344862 logs.go:282] 0 containers: []
	W1017 20:10:17.457438  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:10:17.457447  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:10:17.457512  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:10:17.487872  344862 cri.go:89] found id: ""
	I1017 20:10:17.487909  344862 logs.go:282] 0 containers: []
	W1017 20:10:17.487921  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:10:17.487941  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:10:17.487957  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:10:17.525761  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:10:17.525797  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:10:17.592170  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:10:17.592206  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1017 20:10:16.948900  357835 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.097814406s
	I1017 20:10:17.914214  357835 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.063514748s
	I1017 20:10:19.853343  357835 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002304749s
	I1017 20:10:19.865440  357835 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:10:19.878322  357835 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:10:19.890698  357835 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:10:19.891009  357835 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-449580 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:10:19.900493  357835 kubeadm.go:318] [bootstrap-token] Using token: e61jsv.sf9nvmjtf9jxnih4
	W1017 20:10:16.145486  353504 node_ready.go:57] node "old-k8s-version-726816" has "Ready":"False" status (will retry)
	W1017 20:10:18.643049  353504 node_ready.go:57] node "old-k8s-version-726816" has "Ready":"False" status (will retry)
	I1017 20:10:19.145140  353504 node_ready.go:49] node "old-k8s-version-726816" is "Ready"
	I1017 20:10:19.145176  353504 node_ready.go:38] duration metric: took 13.505561135s for node "old-k8s-version-726816" to be "Ready" ...
	I1017 20:10:19.145194  353504 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:10:19.145258  353504 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:10:19.160587  353504 api_server.go:72] duration metric: took 13.929755903s to wait for apiserver process to appear ...
	I1017 20:10:19.160621  353504 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:10:19.160668  353504 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1017 20:10:19.165894  353504 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1017 20:10:19.167227  353504 api_server.go:141] control plane version: v1.28.0
	I1017 20:10:19.167257  353504 api_server.go:131] duration metric: took 6.626796ms to wait for apiserver health ...
	I1017 20:10:19.167267  353504 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:10:19.173596  353504 system_pods.go:59] 8 kube-system pods found
	I1017 20:10:19.173719  353504 system_pods.go:61] "coredns-5dd5756b68-xrnvz" [ea96a948-87ea-4887-a10e-2b89f19622ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:10:19.174095  353504 system_pods.go:61] "etcd-old-k8s-version-726816" [8419f84a-61e7-4f24-a464-117a44f08b48] Running
	I1017 20:10:19.174123  353504 system_pods.go:61] "kindnet-9slhm" [ce4b307a-d88f-4893-a7bb-e6a84d2209f7] Running
	I1017 20:10:19.174131  353504 system_pods.go:61] "kube-apiserver-old-k8s-version-726816" [0b1e1d8e-b9ee-4dbf-92a3-ba9624571808] Running
	I1017 20:10:19.174138  353504 system_pods.go:61] "kube-controller-manager-old-k8s-version-726816" [ebb02742-9cc3-4f5a-954c-ff17bd664efd] Running
	I1017 20:10:19.174144  353504 system_pods.go:61] "kube-proxy-xp229" [903311f3-63f5-48b7-a27e-a1f75bb62639] Running
	I1017 20:10:19.174150  353504 system_pods.go:61] "kube-scheduler-old-k8s-version-726816" [d4ad40e5-9206-4285-970e-27113005fe52] Running
	I1017 20:10:19.174165  353504 system_pods.go:61] "storage-provisioner" [a7436816-47a1-4ede-b38d-0fa1a36a8981] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:10:19.174177  353504 system_pods.go:74] duration metric: took 6.902432ms to wait for pod list to return data ...
	I1017 20:10:19.174193  353504 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:10:19.177130  353504 default_sa.go:45] found service account: "default"
	I1017 20:10:19.177158  353504 default_sa.go:55] duration metric: took 2.94536ms for default service account to be created ...
	I1017 20:10:19.177166  353504 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:10:19.181139  353504 system_pods.go:86] 8 kube-system pods found
	I1017 20:10:19.181177  353504 system_pods.go:89] "coredns-5dd5756b68-xrnvz" [ea96a948-87ea-4887-a10e-2b89f19622ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:10:19.181186  353504 system_pods.go:89] "etcd-old-k8s-version-726816" [8419f84a-61e7-4f24-a464-117a44f08b48] Running
	I1017 20:10:19.181194  353504 system_pods.go:89] "kindnet-9slhm" [ce4b307a-d88f-4893-a7bb-e6a84d2209f7] Running
	I1017 20:10:19.181200  353504 system_pods.go:89] "kube-apiserver-old-k8s-version-726816" [0b1e1d8e-b9ee-4dbf-92a3-ba9624571808] Running
	I1017 20:10:19.181207  353504 system_pods.go:89] "kube-controller-manager-old-k8s-version-726816" [ebb02742-9cc3-4f5a-954c-ff17bd664efd] Running
	I1017 20:10:19.181213  353504 system_pods.go:89] "kube-proxy-xp229" [903311f3-63f5-48b7-a27e-a1f75bb62639] Running
	I1017 20:10:19.181222  353504 system_pods.go:89] "kube-scheduler-old-k8s-version-726816" [d4ad40e5-9206-4285-970e-27113005fe52] Running
	I1017 20:10:19.181229  353504 system_pods.go:89] "storage-provisioner" [a7436816-47a1-4ede-b38d-0fa1a36a8981] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:10:19.181255  353504 retry.go:31] will retry after 275.033835ms: missing components: kube-dns
	I1017 20:10:19.462906  353504 system_pods.go:86] 8 kube-system pods found
	I1017 20:10:19.462952  353504 system_pods.go:89] "coredns-5dd5756b68-xrnvz" [ea96a948-87ea-4887-a10e-2b89f19622ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:10:19.462962  353504 system_pods.go:89] "etcd-old-k8s-version-726816" [8419f84a-61e7-4f24-a464-117a44f08b48] Running
	I1017 20:10:19.462970  353504 system_pods.go:89] "kindnet-9slhm" [ce4b307a-d88f-4893-a7bb-e6a84d2209f7] Running
	I1017 20:10:19.462975  353504 system_pods.go:89] "kube-apiserver-old-k8s-version-726816" [0b1e1d8e-b9ee-4dbf-92a3-ba9624571808] Running
	I1017 20:10:19.462981  353504 system_pods.go:89] "kube-controller-manager-old-k8s-version-726816" [ebb02742-9cc3-4f5a-954c-ff17bd664efd] Running
	I1017 20:10:19.462986  353504 system_pods.go:89] "kube-proxy-xp229" [903311f3-63f5-48b7-a27e-a1f75bb62639] Running
	I1017 20:10:19.462991  353504 system_pods.go:89] "kube-scheduler-old-k8s-version-726816" [d4ad40e5-9206-4285-970e-27113005fe52] Running
	I1017 20:10:19.463014  353504 system_pods.go:89] "storage-provisioner" [a7436816-47a1-4ede-b38d-0fa1a36a8981] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:10:19.463033  353504 retry.go:31] will retry after 328.88499ms: missing components: kube-dns
	I1017 20:10:19.796375  353504 system_pods.go:86] 8 kube-system pods found
	I1017 20:10:19.796404  353504 system_pods.go:89] "coredns-5dd5756b68-xrnvz" [ea96a948-87ea-4887-a10e-2b89f19622ec] Running
	I1017 20:10:19.796409  353504 system_pods.go:89] "etcd-old-k8s-version-726816" [8419f84a-61e7-4f24-a464-117a44f08b48] Running
	I1017 20:10:19.796412  353504 system_pods.go:89] "kindnet-9slhm" [ce4b307a-d88f-4893-a7bb-e6a84d2209f7] Running
	I1017 20:10:19.796415  353504 system_pods.go:89] "kube-apiserver-old-k8s-version-726816" [0b1e1d8e-b9ee-4dbf-92a3-ba9624571808] Running
	I1017 20:10:19.796419  353504 system_pods.go:89] "kube-controller-manager-old-k8s-version-726816" [ebb02742-9cc3-4f5a-954c-ff17bd664efd] Running
	I1017 20:10:19.796422  353504 system_pods.go:89] "kube-proxy-xp229" [903311f3-63f5-48b7-a27e-a1f75bb62639] Running
	I1017 20:10:19.796425  353504 system_pods.go:89] "kube-scheduler-old-k8s-version-726816" [d4ad40e5-9206-4285-970e-27113005fe52] Running
	I1017 20:10:19.796428  353504 system_pods.go:89] "storage-provisioner" [a7436816-47a1-4ede-b38d-0fa1a36a8981] Running
	I1017 20:10:19.796437  353504 system_pods.go:126] duration metric: took 619.264781ms to wait for k8s-apps to be running ...
	I1017 20:10:19.796444  353504 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:10:19.796489  353504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:10:19.810138  353504 system_svc.go:56] duration metric: took 13.680296ms WaitForService to wait for kubelet
	I1017 20:10:19.810173  353504 kubeadm.go:586] duration metric: took 14.579350278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:10:19.810189  353504 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:10:19.813524  353504 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:10:19.813552  353504 node_conditions.go:123] node cpu capacity is 8
	I1017 20:10:19.813567  353504 node_conditions.go:105] duration metric: took 3.373698ms to run NodePressure ...
	I1017 20:10:19.813579  353504 start.go:241] waiting for startup goroutines ...
	I1017 20:10:19.813585  353504 start.go:246] waiting for cluster config update ...
	I1017 20:10:19.813595  353504 start.go:255] writing updated cluster config ...
	I1017 20:10:19.813937  353504 ssh_runner.go:195] Run: rm -f paused
	I1017 20:10:19.818194  353504 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:10:19.822943  353504 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-xrnvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:19.828170  353504 pod_ready.go:94] pod "coredns-5dd5756b68-xrnvz" is "Ready"
	I1017 20:10:19.828205  353504 pod_ready.go:86] duration metric: took 5.238313ms for pod "coredns-5dd5756b68-xrnvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:19.831245  353504 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:19.836124  353504 pod_ready.go:94] pod "etcd-old-k8s-version-726816" is "Ready"
	I1017 20:10:19.836154  353504 pod_ready.go:86] duration metric: took 4.875656ms for pod "etcd-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:19.839147  353504 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:19.843790  353504 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-726816" is "Ready"
	I1017 20:10:19.843820  353504 pod_ready.go:86] duration metric: took 4.644104ms for pod "kube-apiserver-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:19.846548  353504 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:20.223052  353504 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-726816" is "Ready"
	I1017 20:10:20.223081  353504 pod_ready.go:86] duration metric: took 376.507802ms for pod "kube-controller-manager-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:20.423668  353504 pod_ready.go:83] waiting for pod "kube-proxy-xp229" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:20.822791  353504 pod_ready.go:94] pod "kube-proxy-xp229" is "Ready"
	I1017 20:10:20.822818  353504 pod_ready.go:86] duration metric: took 399.116295ms for pod "kube-proxy-xp229" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:19.902540  357835 out.go:252]   - Configuring RBAC rules ...
	I1017 20:10:19.902692  357835 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:10:19.905538  357835 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:10:19.912004  357835 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:10:19.914761  357835 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:10:19.918386  357835 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:10:19.921127  357835 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:10:20.259718  357835 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:10:20.677321  357835 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:10:21.259470  357835 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:10:21.260217  357835 kubeadm.go:318] 
	I1017 20:10:21.260330  357835 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:10:21.260341  357835 kubeadm.go:318] 
	I1017 20:10:21.260453  357835 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:10:21.260464  357835 kubeadm.go:318] 
	I1017 20:10:21.260527  357835 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:10:21.260623  357835 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:10:21.260703  357835 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:10:21.260713  357835 kubeadm.go:318] 
	I1017 20:10:21.260812  357835 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:10:21.260824  357835 kubeadm.go:318] 
	I1017 20:10:21.260890  357835 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:10:21.260901  357835 kubeadm.go:318] 
	I1017 20:10:21.260988  357835 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:10:21.261108  357835 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:10:21.261214  357835 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:10:21.261232  357835 kubeadm.go:318] 
	I1017 20:10:21.261369  357835 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:10:21.261492  357835 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:10:21.261514  357835 kubeadm.go:318] 
	I1017 20:10:21.261599  357835 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token e61jsv.sf9nvmjtf9jxnih4 \
	I1017 20:10:21.261722  357835 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 \
	I1017 20:10:21.261771  357835 kubeadm.go:318] 	--control-plane 
	I1017 20:10:21.261777  357835 kubeadm.go:318] 
	I1017 20:10:21.261899  357835 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:10:21.261910  357835 kubeadm.go:318] 
	I1017 20:10:21.262029  357835 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token e61jsv.sf9nvmjtf9jxnih4 \
	I1017 20:10:21.262174  357835 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 
	I1017 20:10:21.264239  357835 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 20:10:21.264398  357835 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
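	[editor's note] For reference, the --discovery-token-ca-cert-hash value printed in the join commands above can be recomputed on the control plane from the cluster CA certificate. This is the standard one-liner from the Kubernetes kubeadm join documentation, assuming the default /etc/kubernetes/pki/ca.crt path used here:
	
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	
	The sha256:... prefix in the kubeadm output corresponds to the hex digest this pipeline prints.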
	I1017 20:10:21.264433  357835 cni.go:84] Creating CNI manager for ""
	I1017 20:10:21.264446  357835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:10:21.267608  357835 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 20:10:21.269089  357835 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:10:21.273898  357835 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:10:21.273919  357835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:10:21.288926  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:10:21.023698  353504 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:21.422822  353504 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-726816" is "Ready"
	I1017 20:10:21.422851  353504 pod_ready.go:86] duration metric: took 399.126875ms for pod "kube-scheduler-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:21.422866  353504 pod_ready.go:40] duration metric: took 1.604628168s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:10:21.480185  353504 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1017 20:10:21.482358  353504 out.go:203] 
	W1017 20:10:21.484289  353504 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1017 20:10:21.486039  353504 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1017 20:10:21.488009  353504 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-726816" cluster and "default" namespace by default
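	[editor's note] The pod_ready waits above poll each matching kube-system pod until its Ready condition is true or the pod is gone. A rough standalone equivalent of one such wait, sketched with plain kubectl (the selector and timeout mirror the log; kubectl wait has no "or be gone" fallback, so this is an approximation, not minikube's actual mechanism):
	
	  kubectl wait --for=condition=Ready pod -l k8s-app=kube-dns -n kube-system --timeout=4m0s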
	I1017 20:10:21.544943  357835 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:10:21.545087  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:21.545173  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-449580 minikube.k8s.io/updated_at=2025_10_17T20_10_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=no-preload-449580 minikube.k8s.io/primary=true
	I1017 20:10:21.562386  357835 ops.go:34] apiserver oom_adj: -16
	I1017 20:10:21.635259  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:22.135978  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:22.635613  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:23.135683  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:23.635789  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:24.135421  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:24.635472  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:25.135626  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:25.635976  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:26.136177  357835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:26.205635  357835 kubeadm.go:1113] duration metric: took 4.660597034s to wait for elevateKubeSystemPrivileges
	I1017 20:10:26.205674  357835 kubeadm.go:402] duration metric: took 15.418029064s to StartCluster
	I1017 20:10:26.205707  357835 settings.go:142] acquiring lock: {Name:mka4633fb25e97d0a4c6d64012444d90b7517c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:26.205803  357835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:10:26.207069  357835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:26.207330  357835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:10:26.207327  357835 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:10:26.207423  357835 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:10:26.207516  357835 addons.go:69] Setting storage-provisioner=true in profile "no-preload-449580"
	I1017 20:10:26.207536  357835 addons.go:238] Setting addon storage-provisioner=true in "no-preload-449580"
	I1017 20:10:26.207539  357835 config.go:182] Loaded profile config "no-preload-449580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:10:26.207540  357835 addons.go:69] Setting default-storageclass=true in profile "no-preload-449580"
	I1017 20:10:26.207571  357835 host.go:66] Checking if "no-preload-449580" exists ...
	I1017 20:10:26.207577  357835 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-449580"
	I1017 20:10:26.207933  357835 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:10:26.208098  357835 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:10:26.211942  357835 out.go:179] * Verifying Kubernetes components...
	I1017 20:10:26.213559  357835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:10:26.232069  357835 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:10:26.232681  357835 addons.go:238] Setting addon default-storageclass=true in "no-preload-449580"
	I1017 20:10:26.232728  357835 host.go:66] Checking if "no-preload-449580" exists ...
	I1017 20:10:26.233160  357835 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:10:26.233786  357835 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:10:26.233806  357835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:10:26.233867  357835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:10:26.261868  357835 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:10:26.261896  357835 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:10:26.262118  357835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:10:26.262199  357835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:10:26.286081  357835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:10:26.305801  357835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:10:26.370695  357835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:10:26.374869  357835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:10:26.402010  357835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:10:26.498040  357835 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1017 20:10:26.499500  357835 node_ready.go:35] waiting up to 6m0s for node "no-preload-449580" to be "Ready" ...
	I1017 20:10:26.699088  357835 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
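	[editor's note] The sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway. Reconstructed from the two sed expressions, the Corefile gains a log directive before errors and, immediately above its forward . /etc/resolv.conf line, a stanza like:
	
	          hosts {
	             192.168.103.1 host.minikube.internal
	             fallthrough
	          }
	
	which matches the "host record injected into CoreDNS's ConfigMap" message above.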
	
	
	==> CRI-O <==
	Oct 17 20:10:19 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:19.263460787Z" level=info msg="Starting container: 3cf29fe8762185a91a54c7d60dd11ad4c4ea68dbd8d1e466a5f2a3ea89358922" id=bf90bed4-2863-46f5-bcc3-c9fd95c9ba0f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:10:19 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:19.265492084Z" level=info msg="Started container" PID=2184 containerID=3cf29fe8762185a91a54c7d60dd11ad4c4ea68dbd8d1e466a5f2a3ea89358922 description=kube-system/coredns-5dd5756b68-xrnvz/coredns id=bf90bed4-2863-46f5-bcc3-c9fd95c9ba0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=250f7e2244c28dafbfa1e25c6f588287f30d721c40869efe2cf39905ce7520a6
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.04457272Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8400b07e-641e-4677-826e-c7d850dbc474 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.044666527Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.049677913Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e968ceafc373aff3ad3398a3edca858767aa531f2c338501237781643bf66bb1 UID:d45668ea-4755-40f0-8901-dc50444939c7 NetNS:/var/run/netns/4ef27217-093b-45ce-8ae9-7d8207b57efb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00088eca8}] Aliases:map[]}"
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.049709337Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.059834415Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e968ceafc373aff3ad3398a3edca858767aa531f2c338501237781643bf66bb1 UID:d45668ea-4755-40f0-8901-dc50444939c7 NetNS:/var/run/netns/4ef27217-093b-45ce-8ae9-7d8207b57efb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00088eca8}] Aliases:map[]}"
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.060016165Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.060865555Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.062185056Z" level=info msg="Ran pod sandbox e968ceafc373aff3ad3398a3edca858767aa531f2c338501237781643bf66bb1 with infra container: default/busybox/POD" id=8400b07e-641e-4677-826e-c7d850dbc474 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.063456465Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=587087b9-80b5-436e-b478-bdb1110fe6eb name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.06356726Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=587087b9-80b5-436e-b478-bdb1110fe6eb name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.063597171Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=587087b9-80b5-436e-b478-bdb1110fe6eb name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.064128185Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a103f2f3-7f65-43ad-9a49-577b1acf38a9 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:10:22 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:22.065699388Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 20:10:24 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:24.238410471Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=a103f2f3-7f65-43ad-9a49-577b1acf38a9 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:10:24 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:24.239440528Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3953a25d-38d6-401e-8472-89a9d6903d6c name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:24 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:24.241188033Z" level=info msg="Creating container: default/busybox/busybox" id=6a720a8e-8921-48b2-b801-52cc3bdf2953 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:10:24 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:24.242106127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:24 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:24.246650749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:24 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:24.247242543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:24 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:24.274275647Z" level=info msg="Created container 07a125b08bea1480cb9d37d758c3341fceb9a82c0b30fae039e29ae6fb262bef: default/busybox/busybox" id=6a720a8e-8921-48b2-b801-52cc3bdf2953 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:10:24 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:24.274935779Z" level=info msg="Starting container: 07a125b08bea1480cb9d37d758c3341fceb9a82c0b30fae039e29ae6fb262bef" id=efe8ccf1-a327-4e11-af20-24a85ca084ae name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:10:24 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:24.27676091Z" level=info msg="Started container" PID=2256 containerID=07a125b08bea1480cb9d37d758c3341fceb9a82c0b30fae039e29ae6fb262bef description=default/busybox/busybox id=efe8ccf1-a327-4e11-af20-24a85ca084ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=e968ceafc373aff3ad3398a3edca858767aa531f2c338501237781643bf66bb1
	Oct 17 20:10:29 old-k8s-version-726816 crio[779]: time="2025-10-17T20:10:29.827012427Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	07a125b08bea1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   e968ceafc373a       busybox                                          default
	3cf29fe876218       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   250f7e2244c28       coredns-5dd5756b68-xrnvz                         kube-system
	f7b02947a0c93       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   7318c85967445       storage-provisioner                              kube-system
	19261f4327d40       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   5c90dc4647edc       kindnet-9slhm                                    kube-system
	3de77987bc42a       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   c4b6bcd3b3796       kube-proxy-xp229                                 kube-system
	b9f2f2b3f2814       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   e3add46641220       kube-apiserver-old-k8s-version-726816            kube-system
	6945df3dab9b2       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   34b48bed7cca2       kube-scheduler-old-k8s-version-726816            kube-system
	79620bb4f87c4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   10201be1fffc7       etcd-old-k8s-version-726816                      kube-system
	50724c1196407       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   9e360af2c64d5       kube-controller-manager-old-k8s-version-726816   kube-system
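	[editor's note] This table is the CRI-O runtime's view of the node's containers; an equivalent listing can be produced on the node with the standard cri-tools CLI (column set varies slightly by crictl version):
	
	  sudo crictl ps -a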
	
	
	==> coredns [3cf29fe8762185a91a54c7d60dd11ad4c4ea68dbd8d1e466a5f2a3ea89358922] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33571 - 2069 "HINFO IN 408257871005396100.2114794965811214292. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.080924934s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-726816
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-726816
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=old-k8s-version-726816
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_09_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:09:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-726816
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:10:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:10:22 +0000   Fri, 17 Oct 2025 20:09:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:10:22 +0000   Fri, 17 Oct 2025 20:09:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:10:22 +0000   Fri, 17 Oct 2025 20:09:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:10:22 +0000   Fri, 17 Oct 2025 20:10:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-726816
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                239cdd26-1e67-40fc-a3aa-17a6bcadd5b2
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-xrnvz                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-old-k8s-version-726816                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-9slhm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-726816             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-726816    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-xp229                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-726816             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x9 over 45s)  kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node old-k8s-version-726816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-726816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node old-k8s-version-726816 event: Registered Node old-k8s-version-726816 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-726816 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [79620bb4f87c41fc62cd0281ae85964eda904d6b6482522bd0cb7fbf164470da] <==
	{"level":"info","ts":"2025-10-17T20:09:46.798549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-10-17T20:09:46.798683Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-10-17T20:09:46.800419Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-17T20:09:46.800635Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-17T20:09:46.800668Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T20:09:46.80077Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-17T20:09:46.800783Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-17T20:09:46.987682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-17T20:09:46.987731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-17T20:09:46.987792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-10-17T20:09:46.987808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-10-17T20:09:46.987814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-10-17T20:09:46.987821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-10-17T20:09:46.987828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-10-17T20:09:46.988909Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-726816 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T20:09:46.988904Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:09:46.988946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:09:46.988921Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:09:46.989131Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T20:09:46.989152Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-17T20:09:46.990463Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:09:46.990585Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:09:46.99061Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:09:46.991045Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-10-17T20:09:46.991239Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:10:31 up  1:52,  0 user,  load average: 4.62, 3.69, 2.30
	Linux old-k8s-version-726816 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19261f4327d400b31b044b7d4a0895bc10e11b45d442345e43a6889bf45822d2] <==
	I1017 20:10:08.533468       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:10:08.533862       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1017 20:10:08.534014       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:10:08.534030       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:10:08.534052       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:10:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:10:08.738042       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:10:08.792569       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:10:08.792636       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:10:08.833927       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:10:09.093279       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:10:09.093344       1 metrics.go:72] Registering metrics
	I1017 20:10:09.093418       1 controller.go:711] "Syncing nftables rules"
	I1017 20:10:18.745881       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:10:18.745941       1 main.go:301] handling current node
	I1017 20:10:28.741085       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:10:28.741136       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b9f2f2b3f2814ec9b146510092b7c8bd4d112234851b5114c33cb0447b2f2c7d] <==
	I1017 20:09:48.421701       1 shared_informer.go:318] Caches are synced for configmaps
	I1017 20:09:48.421771       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1017 20:09:48.421799       1 aggregator.go:166] initial CRD sync complete...
	I1017 20:09:48.421810       1 autoregister_controller.go:141] Starting autoregister controller
	I1017 20:09:48.421815       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:09:48.421823       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:09:48.422207       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1017 20:09:48.423356       1 controller.go:624] quota admission added evaluator for: namespaces
	I1017 20:09:48.450306       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:09:48.455071       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1017 20:09:49.326677       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 20:09:49.330656       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 20:09:49.330676       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:09:49.815814       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:09:49.855865       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:09:49.941610       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 20:09:49.947776       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1017 20:09:49.948922       1 controller.go:624] quota admission added evaluator for: endpoints
	I1017 20:09:49.953197       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:09:50.360513       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1017 20:09:51.540620       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1017 20:09:51.554910       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 20:09:51.566913       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1017 20:10:04.889581       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1017 20:10:05.039299       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [50724c1196407ade9357ed1f7aea8a8a62a26ca0c1b12d60a2893994bbdacbc9] <==
	I1017 20:10:04.394877       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 20:10:04.434917       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1017 20:10:04.485188       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1017 20:10:04.489569       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 20:10:04.818945       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 20:10:04.886225       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 20:10:04.886264       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1017 20:10:04.901886       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9slhm"
	I1017 20:10:04.901915       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xp229"
	I1017 20:10:05.045477       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1017 20:10:05.324135       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-nqjr9"
	I1017 20:10:05.340487       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xrnvz"
	I1017 20:10:05.353043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="308.2462ms"
	I1017 20:10:05.362461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.272948ms"
	I1017 20:10:05.362813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.107µs"
	I1017 20:10:05.683165       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1017 20:10:05.698389       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-nqjr9"
	I1017 20:10:05.707615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.337823ms"
	I1017 20:10:05.720977       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.294906ms"
	I1017 20:10:05.721244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.612µs"
	I1017 20:10:18.900464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.132µs"
	I1017 20:10:18.913786       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="141.04µs"
	I1017 20:10:19.225498       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1017 20:10:19.739599       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.776122ms"
	I1017 20:10:19.739724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.847µs"
	
	
	==> kube-proxy [3de77987bc42a26f26df8c452a0e7e936ff5d9d1825b972abef18794bf3f54ea] <==
	I1017 20:10:05.730273       1 server_others.go:69] "Using iptables proxy"
	I1017 20:10:05.740817       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1017 20:10:05.768133       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:10:05.771378       1 server_others.go:152] "Using iptables Proxier"
	I1017 20:10:05.771428       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1017 20:10:05.771441       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1017 20:10:05.771475       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1017 20:10:05.771837       1 server.go:846] "Version info" version="v1.28.0"
	I1017 20:10:05.771860       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:10:05.773413       1 config.go:97] "Starting endpoint slice config controller"
	I1017 20:10:05.773671       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1017 20:10:05.773481       1 config.go:188] "Starting service config controller"
	I1017 20:10:05.773964       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1017 20:10:05.773547       1 config.go:315] "Starting node config controller"
	I1017 20:10:05.774223       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1017 20:10:05.874444       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1017 20:10:05.875613       1 shared_informer.go:318] Caches are synced for node config
	I1017 20:10:05.875629       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [6945df3dab9b2241845c5f8e52b147c2ffa8b06be5f2b2c334d21691462b55ec] <==
	W1017 20:09:48.382511       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1017 20:09:48.382533       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1017 20:09:48.382606       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1017 20:09:48.382622       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1017 20:09:48.382647       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1017 20:09:48.382677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1017 20:09:49.219601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1017 20:09:49.219636       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1017 20:09:49.259165       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1017 20:09:49.259203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1017 20:09:49.422646       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1017 20:09:49.422685       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1017 20:09:49.452024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1017 20:09:49.452057       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1017 20:09:49.489910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1017 20:09:49.489945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1017 20:09:49.505289       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1017 20:09:49.505329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1017 20:09:49.531781       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1017 20:09:49.531814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1017 20:09:49.600607       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1017 20:09:49.600637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1017 20:09:49.731758       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1017 20:09:49.731795       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1017 20:09:52.679027       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 20:10:04 old-k8s-version-726816 kubelet[1414]: I1017 20:10:04.224973    1414 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 20:10:04 old-k8s-version-726816 kubelet[1414]: I1017 20:10:04.225875    1414 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 20:10:04 old-k8s-version-726816 kubelet[1414]: I1017 20:10:04.985017    1414 topology_manager.go:215] "Topology Admit Handler" podUID="ce4b307a-d88f-4893-a7bb-e6a84d2209f7" podNamespace="kube-system" podName="kindnet-9slhm"
	Oct 17 20:10:04 old-k8s-version-726816 kubelet[1414]: I1017 20:10:04.986543    1414 topology_manager.go:215] "Topology Admit Handler" podUID="903311f3-63f5-48b7-a27e-a1f75bb62639" podNamespace="kube-system" podName="kube-proxy-xp229"
	Oct 17 20:10:05 old-k8s-version-726816 kubelet[1414]: I1017 20:10:05.182683    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ce4b307a-d88f-4893-a7bb-e6a84d2209f7-cni-cfg\") pod \"kindnet-9slhm\" (UID: \"ce4b307a-d88f-4893-a7bb-e6a84d2209f7\") " pod="kube-system/kindnet-9slhm"
	Oct 17 20:10:05 old-k8s-version-726816 kubelet[1414]: I1017 20:10:05.182781    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/903311f3-63f5-48b7-a27e-a1f75bb62639-xtables-lock\") pod \"kube-proxy-xp229\" (UID: \"903311f3-63f5-48b7-a27e-a1f75bb62639\") " pod="kube-system/kube-proxy-xp229"
	Oct 17 20:10:05 old-k8s-version-726816 kubelet[1414]: I1017 20:10:05.182820    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv8kw\" (UniqueName: \"kubernetes.io/projected/903311f3-63f5-48b7-a27e-a1f75bb62639-kube-api-access-hv8kw\") pod \"kube-proxy-xp229\" (UID: \"903311f3-63f5-48b7-a27e-a1f75bb62639\") " pod="kube-system/kube-proxy-xp229"
	Oct 17 20:10:05 old-k8s-version-726816 kubelet[1414]: I1017 20:10:05.182854    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce4b307a-d88f-4893-a7bb-e6a84d2209f7-lib-modules\") pod \"kindnet-9slhm\" (UID: \"ce4b307a-d88f-4893-a7bb-e6a84d2209f7\") " pod="kube-system/kindnet-9slhm"
	Oct 17 20:10:05 old-k8s-version-726816 kubelet[1414]: I1017 20:10:05.182882    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/903311f3-63f5-48b7-a27e-a1f75bb62639-kube-proxy\") pod \"kube-proxy-xp229\" (UID: \"903311f3-63f5-48b7-a27e-a1f75bb62639\") " pod="kube-system/kube-proxy-xp229"
	Oct 17 20:10:05 old-k8s-version-726816 kubelet[1414]: I1017 20:10:05.183051    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/903311f3-63f5-48b7-a27e-a1f75bb62639-lib-modules\") pod \"kube-proxy-xp229\" (UID: \"903311f3-63f5-48b7-a27e-a1f75bb62639\") " pod="kube-system/kube-proxy-xp229"
	Oct 17 20:10:05 old-k8s-version-726816 kubelet[1414]: I1017 20:10:05.183103    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce4b307a-d88f-4893-a7bb-e6a84d2209f7-xtables-lock\") pod \"kindnet-9slhm\" (UID: \"ce4b307a-d88f-4893-a7bb-e6a84d2209f7\") " pod="kube-system/kindnet-9slhm"
	Oct 17 20:10:05 old-k8s-version-726816 kubelet[1414]: I1017 20:10:05.183136    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f945v\" (UniqueName: \"kubernetes.io/projected/ce4b307a-d88f-4893-a7bb-e6a84d2209f7-kube-api-access-f945v\") pod \"kindnet-9slhm\" (UID: \"ce4b307a-d88f-4893-a7bb-e6a84d2209f7\") " pod="kube-system/kindnet-9slhm"
	Oct 17 20:10:08 old-k8s-version-726816 kubelet[1414]: I1017 20:10:08.701683    1414 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-9slhm" podStartSLOduration=2.12790322 podCreationTimestamp="2025-10-17 20:10:04 +0000 UTC" firstStartedPulling="2025-10-17 20:10:05.606575764 +0000 UTC m=+14.100339040" lastFinishedPulling="2025-10-17 20:10:08.18029573 +0000 UTC m=+16.674059016" observedRunningTime="2025-10-17 20:10:08.701571606 +0000 UTC m=+17.195334895" watchObservedRunningTime="2025-10-17 20:10:08.701623196 +0000 UTC m=+17.195386481"
	Oct 17 20:10:08 old-k8s-version-726816 kubelet[1414]: I1017 20:10:08.701896    1414 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xp229" podStartSLOduration=4.7018616810000005 podCreationTimestamp="2025-10-17 20:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:10:06.698363178 +0000 UTC m=+15.192126464" watchObservedRunningTime="2025-10-17 20:10:08.701861681 +0000 UTC m=+17.195624967"
	Oct 17 20:10:18 old-k8s-version-726816 kubelet[1414]: I1017 20:10:18.872625    1414 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 17 20:10:18 old-k8s-version-726816 kubelet[1414]: I1017 20:10:18.900674    1414 topology_manager.go:215] "Topology Admit Handler" podUID="ea96a948-87ea-4887-a10e-2b89f19622ec" podNamespace="kube-system" podName="coredns-5dd5756b68-xrnvz"
	Oct 17 20:10:18 old-k8s-version-726816 kubelet[1414]: I1017 20:10:18.900937    1414 topology_manager.go:215] "Topology Admit Handler" podUID="a7436816-47a1-4ede-b38d-0fa1a36a8981" podNamespace="kube-system" podName="storage-provisioner"
	Oct 17 20:10:19 old-k8s-version-726816 kubelet[1414]: I1017 20:10:19.080955    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a7436816-47a1-4ede-b38d-0fa1a36a8981-tmp\") pod \"storage-provisioner\" (UID: \"a7436816-47a1-4ede-b38d-0fa1a36a8981\") " pod="kube-system/storage-provisioner"
	Oct 17 20:10:19 old-k8s-version-726816 kubelet[1414]: I1017 20:10:19.081028    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea96a948-87ea-4887-a10e-2b89f19622ec-config-volume\") pod \"coredns-5dd5756b68-xrnvz\" (UID: \"ea96a948-87ea-4887-a10e-2b89f19622ec\") " pod="kube-system/coredns-5dd5756b68-xrnvz"
	Oct 17 20:10:19 old-k8s-version-726816 kubelet[1414]: I1017 20:10:19.081160    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdclz\" (UniqueName: \"kubernetes.io/projected/a7436816-47a1-4ede-b38d-0fa1a36a8981-kube-api-access-bdclz\") pod \"storage-provisioner\" (UID: \"a7436816-47a1-4ede-b38d-0fa1a36a8981\") " pod="kube-system/storage-provisioner"
	Oct 17 20:10:19 old-k8s-version-726816 kubelet[1414]: I1017 20:10:19.081234    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8ws5\" (UniqueName: \"kubernetes.io/projected/ea96a948-87ea-4887-a10e-2b89f19622ec-kube-api-access-c8ws5\") pod \"coredns-5dd5756b68-xrnvz\" (UID: \"ea96a948-87ea-4887-a10e-2b89f19622ec\") " pod="kube-system/coredns-5dd5756b68-xrnvz"
	Oct 17 20:10:19 old-k8s-version-726816 kubelet[1414]: I1017 20:10:19.721042    1414 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.720982464 podCreationTimestamp="2025-10-17 20:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:10:19.720928949 +0000 UTC m=+28.214692233" watchObservedRunningTime="2025-10-17 20:10:19.720982464 +0000 UTC m=+28.214745781"
	Oct 17 20:10:19 old-k8s-version-726816 kubelet[1414]: I1017 20:10:19.732011    1414 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xrnvz" podStartSLOduration=14.731969535 podCreationTimestamp="2025-10-17 20:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:10:19.731704015 +0000 UTC m=+28.225467301" watchObservedRunningTime="2025-10-17 20:10:19.731969535 +0000 UTC m=+28.225732820"
	Oct 17 20:10:21 old-k8s-version-726816 kubelet[1414]: I1017 20:10:21.742655    1414 topology_manager.go:215] "Topology Admit Handler" podUID="d45668ea-4755-40f0-8901-dc50444939c7" podNamespace="default" podName="busybox"
	Oct 17 20:10:21 old-k8s-version-726816 kubelet[1414]: I1017 20:10:21.898380    1414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjsj9\" (UniqueName: \"kubernetes.io/projected/d45668ea-4755-40f0-8901-dc50444939c7-kube-api-access-gjsj9\") pod \"busybox\" (UID: \"d45668ea-4755-40f0-8901-dc50444939c7\") " pod="default/busybox"
	
	
	==> storage-provisioner [f7b02947a0c93f7cb176da9f65192f45b1f8ca88f611e7a526c57f03d23f26a7] <==
	I1017 20:10:19.271313       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:10:19.284103       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:10:19.284273       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1017 20:10:19.294484       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:10:19.294828       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eaf327a4-eed2-4b18-a7d0-89913f7f259a", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-726816_c92d9f87-36df-4621-96d6-ba92630c6e2f became leader
	I1017 20:10:19.295484       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-726816_c92d9f87-36df-4621-96d6-ba92630c6e2f!
	I1017 20:10:19.396351       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-726816_c92d9f87-36df-4621-96d6-ba92630c6e2f!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-726816 -n old-k8s-version-726816
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-726816 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.27s)
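The repeated RBAC denials in the kube-scheduler log above are a typical startup race: the scheduler's informers begin listing resources before the bootstrap system:kube-scheduler role bindings are served, and the same log shows the informer caches syncing a few seconds later. A minimal way to confirm the permission once the apiserver has settled, assuming the old-k8s-version-726816 kubeconfig context from this test (kubectl's standard auth can-i subcommand with impersonation):

	# Ask the apiserver whether the scheduler identity may list nodes.
	kubectl --context old-k8s-version-726816 auth can-i list nodes --as=system:kube-scheduler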

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-449580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-449580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (249.9608ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:10:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-449580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
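The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-flight check that the cluster is not paused, which it answers by running runc inside the node; here the probe itself failed because the runc state directory did not exist. A minimal reproduction sketch, assuming the docker driver keeps the node as a container named after the profile (the runc invocation is the one quoted in the stderr; -f is the short form of runc list's --format flag):

	# Re-run the paused-state probe inside the node container; without any
	# runc-managed containers it fails with "open /run/runc: no such file or directory".
	docker exec no-preload-449580 sudo runc list -f json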
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-449580 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-449580 describe deploy/metrics-server -n kube-system: exit status 1 (67.3512ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-449580 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
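The expected string is composed from the two override flags: --registries=MetricsServer=fake.domain supplies the registry prefix and --images=MetricsServer=registry.k8s.io/echoserver:1.4 the image path, so the deployment should reference fake.domain/registry.k8s.io/echoserver:1.4. A minimal sketch of the same check, assuming the deployment had actually been created (here it was not, which is why the deployment info above is empty):

	# Print only the container image of the metrics-server deployment.
	kubectl --context no-preload-449580 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'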
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-449580
helpers_test.go:243: (dbg) docker inspect no-preload-449580:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb",
	        "Created": "2025-10-17T20:09:52.380878563Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 358363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:09:52.434492243Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb/hosts",
	        "LogPath": "/var/lib/docker/containers/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb-json.log",
	        "Name": "/no-preload-449580",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-449580:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-449580",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb",
	                "LowerDir": "/var/lib/docker/overlay2/c7ad98093ee207252ec827bedcd754cea7ba300950ae4070abdafab8792e4b46-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7ad98093ee207252ec827bedcd754cea7ba300950ae4070abdafab8792e4b46/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7ad98093ee207252ec827bedcd754cea7ba300950ae4070abdafab8792e4b46/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7ad98093ee207252ec827bedcd754cea7ba300950ae4070abdafab8792e4b46/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-449580",
	                "Source": "/var/lib/docker/volumes/no-preload-449580/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-449580",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-449580",
	                "name.minikube.sigs.k8s.io": "no-preload-449580",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ad13636c23ad11018c3351e1a431391e556a2b710806db652de74223d14ab578",
	            "SandboxKey": "/var/run/docker/netns/ad13636c23ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-449580": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:4b:a0:4d:51:ae",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b82ebd045e12b91841d651f11549608344307c54224bf0d85f675490a33cca93",
	                    "EndpointID": "6eee93a199d157dc33833b17f2646086eec5c83cc821e6192a20f5e7a6447bd1",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-449580",
	                        "11713a3ef64d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449580 -n no-preload-449580
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-449580 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-449580 logs -n 25: (1.031446165s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-684669 sudo containerd config dump                                                                                                                                                                                                  │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ ssh     │ -p cilium-684669 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ ssh     │ -p cilium-684669 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ ssh     │ -p cilium-684669 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ ssh     │ -p cilium-684669 sudo crio config                                                                                                                                                                                                             │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ delete  │ -p cilium-684669                                                                                                                                                                                                                              │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p running-upgrade-097245 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                                          │ running-upgrade-097245    │ jenkins │ v1.32.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p force-systemd-env-834947                                                                                                                                                                                                                   │ force-systemd-env-834947  │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p cert-expiration-202048 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-202048    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p missing-upgrade-159057 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-159057    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ stop    │ -p kubernetes-upgrade-660693                                                                                                                                                                                                                  │ kubernetes-upgrade-660693 │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-660693 │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ start   │ -p running-upgrade-097245 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-097245    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p missing-upgrade-159057                                                                                                                                                                                                                     │ missing-upgrade-159057    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p force-systemd-flag-599050 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p running-upgrade-097245                                                                                                                                                                                                                     │ running-upgrade-097245    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ ssh     │ force-systemd-flag-599050 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p force-systemd-flag-599050                                                                                                                                                                                                                  │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-726816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p old-k8s-version-726816 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-726816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-449580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:10:48
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:10:48.390782  365613 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:10:48.391109  365613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:10:48.391122  365613 out.go:374] Setting ErrFile to fd 2...
	I1017 20:10:48.391128  365613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:10:48.391401  365613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:10:48.392018  365613 out.go:368] Setting JSON to false
	I1017 20:10:48.393480  365613 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6796,"bootTime":1760725052,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:10:48.393602  365613 start.go:141] virtualization: kvm guest
	I1017 20:10:48.396242  365613 out.go:179] * [old-k8s-version-726816] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:10:48.398547  365613 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:10:48.398547  365613 notify.go:220] Checking for updates...
	I1017 20:10:48.401873  365613 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:10:48.403323  365613 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:10:48.404662  365613 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:10:48.406158  365613 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:10:48.407496  365613 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:10:48.409532  365613 config.go:182] Loaded profile config "old-k8s-version-726816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:10:48.411723  365613 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1017 20:10:48.414638  365613 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:10:48.444014  365613 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:10:48.444157  365613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:10:48.505999  365613 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-17 20:10:48.495461733 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:10:48.506129  365613 docker.go:318] overlay module found
	I1017 20:10:48.508254  365613 out.go:179] * Using the docker driver based on existing profile
	I1017 20:10:48.510034  365613 start.go:305] selected driver: docker
	I1017 20:10:48.510057  365613 start.go:925] validating driver "docker" against &{Name:old-k8s-version-726816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-726816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:10:48.510171  365613 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:10:48.510807  365613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:10:48.570786  365613 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-17 20:10:48.560174665 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:10:48.571295  365613 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:10:48.571327  365613 cni.go:84] Creating CNI manager for ""
	I1017 20:10:48.571387  365613 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:10:48.571442  365613 start.go:349] cluster config:
	{Name:old-k8s-version-726816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-726816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:10:48.573709  365613 out.go:179] * Starting "old-k8s-version-726816" primary control-plane node in "old-k8s-version-726816" cluster
	I1017 20:10:48.575355  365613 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:10:48.576925  365613 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:10:48.578547  365613 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 20:10:48.578609  365613 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1017 20:10:48.578628  365613 cache.go:58] Caching tarball of preloaded images
	I1017 20:10:48.578691  365613 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:10:48.578780  365613 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:10:48.578797  365613 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1017 20:10:48.578938  365613 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/old-k8s-version-726816/config.json ...
	I1017 20:10:48.600995  365613 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:10:48.601014  365613 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:10:48.601033  365613 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:10:48.601065  365613 start.go:360] acquireMachinesLock for old-k8s-version-726816: {Name:mk07817ba05ca04e1036109d8af317379e2f232e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:10:48.601156  365613 start.go:364] duration metric: took 47.422µs to acquireMachinesLock for "old-k8s-version-726816"
	I1017 20:10:48.601181  365613 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:10:48.601188  365613 fix.go:54] fixHost starting: 
	I1017 20:10:48.601502  365613 cli_runner.go:164] Run: docker container inspect old-k8s-version-726816 --format={{.State.Status}}
	I1017 20:10:48.620162  365613 fix.go:112] recreateIfNeeded on old-k8s-version-726816: state=Stopped err=<nil>
	W1017 20:10:48.620206  365613 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 17 20:10:40 no-preload-449580 crio[766]: time="2025-10-17T20:10:40.05739091Z" level=info msg="Starting container: 0a9916a9066b1c72072a35b3861467bfedfb6d25f27dc7d900500deaceff2b2e" id=cc19b692-e46d-4dc4-9dbd-c213f2d7b51e name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:10:40 no-preload-449580 crio[766]: time="2025-10-17T20:10:40.059776887Z" level=info msg="Started container" PID=2926 containerID=0a9916a9066b1c72072a35b3861467bfedfb6d25f27dc7d900500deaceff2b2e description=kube-system/coredns-66bc5c9577-p4n86/coredns id=cc19b692-e46d-4dc4-9dbd-c213f2d7b51e name=/runtime.v1.RuntimeService/StartContainer sandboxID=707957f46406c920b9fce642028484045fb3eba77ee696dcdff3895f02febce8
	Oct 17 20:10:42 no-preload-449580 crio[766]: time="2025-10-17T20:10:42.987517399Z" level=info msg="Running pod sandbox: default/busybox/POD" id=488470d5-14e2-4c98-996e-5a5039446d1f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:10:42 no-preload-449580 crio[766]: time="2025-10-17T20:10:42.987612807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:42 no-preload-449580 crio[766]: time="2025-10-17T20:10:42.993133625Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:810664b784f828f7119e4c41a05d0adf1600da43d67c6561d4ca60778e73ead7 UID:f84fa1c2-b435-4d7e-8356-a847e5291ee8 NetNS:/var/run/netns/63f849c0-e062-4e85-9f23-f610464c3547 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008af70}] Aliases:map[]}"
	Oct 17 20:10:42 no-preload-449580 crio[766]: time="2025-10-17T20:10:42.993175008Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 20:10:43 no-preload-449580 crio[766]: time="2025-10-17T20:10:43.003854833Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:810664b784f828f7119e4c41a05d0adf1600da43d67c6561d4ca60778e73ead7 UID:f84fa1c2-b435-4d7e-8356-a847e5291ee8 NetNS:/var/run/netns/63f849c0-e062-4e85-9f23-f610464c3547 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008af70}] Aliases:map[]}"
	Oct 17 20:10:43 no-preload-449580 crio[766]: time="2025-10-17T20:10:43.004014578Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 20:10:43 no-preload-449580 crio[766]: time="2025-10-17T20:10:43.004859445Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 20:10:43 no-preload-449580 crio[766]: time="2025-10-17T20:10:43.005632198Z" level=info msg="Ran pod sandbox 810664b784f828f7119e4c41a05d0adf1600da43d67c6561d4ca60778e73ead7 with infra container: default/busybox/POD" id=488470d5-14e2-4c98-996e-5a5039446d1f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:10:43 no-preload-449580 crio[766]: time="2025-10-17T20:10:43.006940091Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=de024888-e43b-4046-bcb2-bb6110f3997c name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:43 no-preload-449580 crio[766]: time="2025-10-17T20:10:43.007072482Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=de024888-e43b-4046-bcb2-bb6110f3997c name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:43 no-preload-449580 crio[766]: time="2025-10-17T20:10:43.007107567Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=de024888-e43b-4046-bcb2-bb6110f3997c name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:43 no-preload-449580 crio[766]: time="2025-10-17T20:10:43.007715664Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1adcb865-44af-44ba-b8ca-d66901daa861 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:10:43 no-preload-449580 crio[766]: time="2025-10-17T20:10:43.009142631Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 20:10:45 no-preload-449580 crio[766]: time="2025-10-17T20:10:45.018058412Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1adcb865-44af-44ba-b8ca-d66901daa861 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:10:45 no-preload-449580 crio[766]: time="2025-10-17T20:10:45.018695199Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fd706efb-ade0-453d-b3c2-3d2ff29c0a99 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:45 no-preload-449580 crio[766]: time="2025-10-17T20:10:45.020287762Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=887c7670-dd01-4617-98d4-e4388ba6f57b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:45 no-preload-449580 crio[766]: time="2025-10-17T20:10:45.024629176Z" level=info msg="Creating container: default/busybox/busybox" id=39b97525-fe91-479b-b1c9-2167f1512fa8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:10:45 no-preload-449580 crio[766]: time="2025-10-17T20:10:45.025580076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:45 no-preload-449580 crio[766]: time="2025-10-17T20:10:45.030359768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:45 no-preload-449580 crio[766]: time="2025-10-17T20:10:45.030955301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:45 no-preload-449580 crio[766]: time="2025-10-17T20:10:45.059062626Z" level=info msg="Created container b4e444ccf759bbf06918ef77eb38b8568cee5fa5ad73b3d1634b1f91b7b528f5: default/busybox/busybox" id=39b97525-fe91-479b-b1c9-2167f1512fa8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:10:45 no-preload-449580 crio[766]: time="2025-10-17T20:10:45.059832537Z" level=info msg="Starting container: b4e444ccf759bbf06918ef77eb38b8568cee5fa5ad73b3d1634b1f91b7b528f5" id=bd545990-29e6-46f5-a1f8-03e6c7c8d01b name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:10:45 no-preload-449580 crio[766]: time="2025-10-17T20:10:45.061905213Z" level=info msg="Started container" PID=3002 containerID=b4e444ccf759bbf06918ef77eb38b8568cee5fa5ad73b3d1634b1f91b7b528f5 description=default/busybox/busybox id=bd545990-29e6-46f5-a1f8-03e6c7c8d01b name=/runtime.v1.RuntimeService/StartContainer sandboxID=810664b784f828f7119e4c41a05d0adf1600da43d67c6561d4ca60778e73ead7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b4e444ccf759b       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   810664b784f82       busybox                                     default
	0a9916a9066b1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   707957f46406c       coredns-66bc5c9577-p4n86                    kube-system
	234dda6ffc9fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   70c82d1792063       storage-provisioner                         kube-system
	9a7a72585477f       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   fa0c1f7612c0c       kindnet-9xg9h                               kube-system
	f4c2c08c7f45e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      26 seconds ago      Running             kube-proxy                0                   38c67d4887f11       kube-proxy-m5g7f                            kube-system
	658ea8b552060       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   d996c35e1b5c0       kube-controller-manager-no-preload-449580   kube-system
	9b0ed86af7949       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   69fab006b4aac       kube-scheduler-no-preload-449580            kube-system
	6f1de2d144282       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   3f6d931005118       kube-apiserver-no-preload-449580            kube-system
	62fb551ebc039       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   b228794ac219a       etcd-no-preload-449580                      kube-system
	
	
	==> coredns [0a9916a9066b1c72072a35b3861467bfedfb6d25f27dc7d900500deaceff2b2e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56382 - 33966 "HINFO IN 6571739802319599513.4853765982744784515. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.10171365s
	
	
	==> describe nodes <==
	Name:               no-preload-449580
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-449580
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=no-preload-449580
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_10_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:10:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-449580
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:10:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:10:51 +0000   Fri, 17 Oct 2025 20:10:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:10:51 +0000   Fri, 17 Oct 2025 20:10:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:10:51 +0000   Fri, 17 Oct 2025 20:10:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:10:51 +0000   Fri, 17 Oct 2025 20:10:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-449580
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                95a628c4-6711-4ed7-bc23-3a2b6d436bf1
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-p4n86                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-449580                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-9xg9h                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-449580             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-449580    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-m5g7f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-449580             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node no-preload-449580 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node no-preload-449580 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node no-preload-449580 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node no-preload-449580 event: Registered Node no-preload-449580 in Controller
	  Normal  NodeReady                13s   kubelet          Node no-preload-449580 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [62fb551ebc039252249b8700bb3090c81b08b3bdb5f414dfd122382407ab55ea] <==
	{"level":"warn","ts":"2025-10-17T20:10:17.200985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.207629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.214397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.221443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.231589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.246263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.253029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.261778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.268865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.284508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.291766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.299342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.306871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.314221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.321619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.328871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.344367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.351691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.358520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.366169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.374378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.386527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.394437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.401954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:17.458891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38404","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:10:52 up  1:53,  0 user,  load average: 3.79, 3.55, 2.28
	Linux no-preload-449580 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9a7a72585477f70a89e774383066c804c59fbe5923cfc6054aee368b9e4a0c22] <==
	I1017 20:10:28.901906       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:10:28.902197       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 20:10:28.902345       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:10:28.902359       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:10:28.902383       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:10:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:10:29.185108       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:10:29.185163       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:10:29.185179       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:10:29.185526       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:10:29.600646       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:10:29.600679       1 metrics.go:72] Registering metrics
	I1017 20:10:29.600793       1 controller.go:711] "Syncing nftables rules"
	I1017 20:10:39.186593       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 20:10:39.186681       1 main.go:301] handling current node
	I1017 20:10:49.186873       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 20:10:49.186929       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6f1de2d144282ffafeb25765ab1640c7f4923c684e37a0640348901f3877b5d0] <==
	I1017 20:10:17.980426       1 controller.go:667] quota admission added evaluator for: namespaces
	E1017 20:10:18.021369       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1017 20:10:18.029397       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:10:18.059050       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:10:18.059125       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 20:10:18.068510       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:10:18.068897       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:10:18.857405       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 20:10:18.862685       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 20:10:18.862711       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:10:19.440841       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:10:19.485383       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:10:19.588133       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 20:10:19.594638       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1017 20:10:19.595708       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:10:19.600388       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:10:19.887390       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:10:20.665480       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:10:20.676332       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 20:10:20.685729       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 20:10:25.589076       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1017 20:10:25.641144       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:10:25.791085       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:10:25.796641       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1017 20:10:50.772356       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:47460: use of closed network connection
	
	
	==> kube-controller-manager [658ea8b552060bec272f6701399d573545493569ebadf8584e4b4270228e2f4a] <==
	I1017 20:10:24.887085       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:10:24.887106       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 20:10:24.887130       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:10:24.887136       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:10:24.887156       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:10:24.887197       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 20:10:24.887268       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:10:24.887319       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:10:24.887331       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:10:24.887335       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:10:24.887978       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 20:10:24.888164       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:10:24.890392       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:10:24.891363       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:10:24.891418       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:10:24.891466       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:10:24.891478       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:10:24.891485       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:10:24.893669       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:10:24.897695       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:10:24.897910       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:10:24.898044       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-449580" podCIDRs=["10.244.0.0/24"]
	I1017 20:10:24.905274       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:10:24.914634       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:10:39.887929       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f4c2c08c7f45e3fac5cce3f8f24c182298d44b035c0e801ca1cf4cc4b62a99c1] <==
	I1017 20:10:26.008116       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:10:26.066790       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:10:26.167266       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:10:26.167302       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 20:10:26.167391       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:10:26.187022       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:10:26.187091       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:10:26.193471       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:10:26.193879       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:10:26.193913       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:10:26.195864       1 config.go:200] "Starting service config controller"
	I1017 20:10:26.195922       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:10:26.195943       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:10:26.195957       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:10:26.196169       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:10:26.196178       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:10:26.196608       1 config.go:309] "Starting node config controller"
	I1017 20:10:26.196620       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:10:26.196634       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:10:26.296225       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:10:26.296245       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 20:10:26.296279       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9b0ed86af7949bb3e7fd38addcb414cf9e13684cd44c9871eba3c847c56d8c09] <==
	E1017 20:10:17.911959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:10:17.911977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:10:17.912061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:10:17.912056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:10:17.912118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:10:17.912686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:10:17.912756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:10:17.912775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:10:17.912820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:10:17.912956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:10:17.912966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:10:18.889385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:10:18.908918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:10:18.965265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:10:19.019768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:10:19.060235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:10:19.116561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:10:19.119938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:10:19.131216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:10:19.143669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:10:19.170718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:10:19.226666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:10:19.237053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:10:19.280914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1017 20:10:21.209721       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:10:21 no-preload-449580 kubelet[2309]: I1017 20:10:21.581703    2309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-449580" podStartSLOduration=1.581646221 podStartE2EDuration="1.581646221s" podCreationTimestamp="2025-10-17 20:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:10:21.569541033 +0000 UTC m=+1.154746561" watchObservedRunningTime="2025-10-17 20:10:21.581646221 +0000 UTC m=+1.166851746"
	Oct 17 20:10:21 no-preload-449580 kubelet[2309]: I1017 20:10:21.592663    2309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-449580" podStartSLOduration=2.592640254 podStartE2EDuration="2.592640254s" podCreationTimestamp="2025-10-17 20:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:10:21.583611807 +0000 UTC m=+1.168817335" watchObservedRunningTime="2025-10-17 20:10:21.592640254 +0000 UTC m=+1.177845784"
	Oct 17 20:10:21 no-preload-449580 kubelet[2309]: I1017 20:10:21.605514    2309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-449580" podStartSLOduration=3.605491893 podStartE2EDuration="3.605491893s" podCreationTimestamp="2025-10-17 20:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:10:21.593544434 +0000 UTC m=+1.178749961" watchObservedRunningTime="2025-10-17 20:10:21.605491893 +0000 UTC m=+1.190697422"
	Oct 17 20:10:21 no-preload-449580 kubelet[2309]: I1017 20:10:21.619167    2309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-449580" podStartSLOduration=1.619148312 podStartE2EDuration="1.619148312s" podCreationTimestamp="2025-10-17 20:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:10:21.605833186 +0000 UTC m=+1.191038714" watchObservedRunningTime="2025-10-17 20:10:21.619148312 +0000 UTC m=+1.204353842"
	Oct 17 20:10:24 no-preload-449580 kubelet[2309]: I1017 20:10:24.928254    2309 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 20:10:24 no-preload-449580 kubelet[2309]: I1017 20:10:24.929010    2309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 20:10:25 no-preload-449580 kubelet[2309]: I1017 20:10:25.719859    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/673bfee2-dc28-4a9a-815e-0f57d9dd92f8-cni-cfg\") pod \"kindnet-9xg9h\" (UID: \"673bfee2-dc28-4a9a-815e-0f57d9dd92f8\") " pod="kube-system/kindnet-9xg9h"
	Oct 17 20:10:25 no-preload-449580 kubelet[2309]: I1017 20:10:25.719920    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blk2k\" (UniqueName: \"kubernetes.io/projected/673bfee2-dc28-4a9a-815e-0f57d9dd92f8-kube-api-access-blk2k\") pod \"kindnet-9xg9h\" (UID: \"673bfee2-dc28-4a9a-815e-0f57d9dd92f8\") " pod="kube-system/kindnet-9xg9h"
	Oct 17 20:10:25 no-preload-449580 kubelet[2309]: I1017 20:10:25.719969    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/673bfee2-dc28-4a9a-815e-0f57d9dd92f8-lib-modules\") pod \"kindnet-9xg9h\" (UID: \"673bfee2-dc28-4a9a-815e-0f57d9dd92f8\") " pod="kube-system/kindnet-9xg9h"
	Oct 17 20:10:25 no-preload-449580 kubelet[2309]: I1017 20:10:25.719997    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bngvv\" (UniqueName: \"kubernetes.io/projected/b0d544c6-f6c2-459c-93b9-22452c8a77d9-kube-api-access-bngvv\") pod \"kube-proxy-m5g7f\" (UID: \"b0d544c6-f6c2-459c-93b9-22452c8a77d9\") " pod="kube-system/kube-proxy-m5g7f"
	Oct 17 20:10:25 no-preload-449580 kubelet[2309]: I1017 20:10:25.720020    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/673bfee2-dc28-4a9a-815e-0f57d9dd92f8-xtables-lock\") pod \"kindnet-9xg9h\" (UID: \"673bfee2-dc28-4a9a-815e-0f57d9dd92f8\") " pod="kube-system/kindnet-9xg9h"
	Oct 17 20:10:25 no-preload-449580 kubelet[2309]: I1017 20:10:25.720038    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b0d544c6-f6c2-459c-93b9-22452c8a77d9-kube-proxy\") pod \"kube-proxy-m5g7f\" (UID: \"b0d544c6-f6c2-459c-93b9-22452c8a77d9\") " pod="kube-system/kube-proxy-m5g7f"
	Oct 17 20:10:25 no-preload-449580 kubelet[2309]: I1017 20:10:25.720063    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0d544c6-f6c2-459c-93b9-22452c8a77d9-xtables-lock\") pod \"kube-proxy-m5g7f\" (UID: \"b0d544c6-f6c2-459c-93b9-22452c8a77d9\") " pod="kube-system/kube-proxy-m5g7f"
	Oct 17 20:10:25 no-preload-449580 kubelet[2309]: I1017 20:10:25.720220    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0d544c6-f6c2-459c-93b9-22452c8a77d9-lib-modules\") pod \"kube-proxy-m5g7f\" (UID: \"b0d544c6-f6c2-459c-93b9-22452c8a77d9\") " pod="kube-system/kube-proxy-m5g7f"
	Oct 17 20:10:26 no-preload-449580 kubelet[2309]: I1017 20:10:26.545611    2309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m5g7f" podStartSLOduration=1.545587958 podStartE2EDuration="1.545587958s" podCreationTimestamp="2025-10-17 20:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:10:26.545096926 +0000 UTC m=+6.130302456" watchObservedRunningTime="2025-10-17 20:10:26.545587958 +0000 UTC m=+6.130793488"
	Oct 17 20:10:29 no-preload-449580 kubelet[2309]: I1017 20:10:29.553476    2309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9xg9h" podStartSLOduration=1.830385871 podStartE2EDuration="4.553461381s" podCreationTimestamp="2025-10-17 20:10:25 +0000 UTC" firstStartedPulling="2025-10-17 20:10:25.923235949 +0000 UTC m=+5.508441456" lastFinishedPulling="2025-10-17 20:10:28.646311442 +0000 UTC m=+8.231516966" observedRunningTime="2025-10-17 20:10:29.553180604 +0000 UTC m=+9.138386138" watchObservedRunningTime="2025-10-17 20:10:29.553461381 +0000 UTC m=+9.138666909"
	Oct 17 20:10:39 no-preload-449580 kubelet[2309]: I1017 20:10:39.661453    2309 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 20:10:39 no-preload-449580 kubelet[2309]: I1017 20:10:39.720138    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68b4r\" (UniqueName: \"kubernetes.io/projected/53d908ca-46ee-49bd-9de8-af09045721ef-kube-api-access-68b4r\") pod \"storage-provisioner\" (UID: \"53d908ca-46ee-49bd-9de8-af09045721ef\") " pod="kube-system/storage-provisioner"
	Oct 17 20:10:39 no-preload-449580 kubelet[2309]: I1017 20:10:39.720183    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/617d6937-5180-4329-853d-32a9b1c9f510-config-volume\") pod \"coredns-66bc5c9577-p4n86\" (UID: \"617d6937-5180-4329-853d-32a9b1c9f510\") " pod="kube-system/coredns-66bc5c9577-p4n86"
	Oct 17 20:10:39 no-preload-449580 kubelet[2309]: I1017 20:10:39.720202    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm8sf\" (UniqueName: \"kubernetes.io/projected/617d6937-5180-4329-853d-32a9b1c9f510-kube-api-access-cm8sf\") pod \"coredns-66bc5c9577-p4n86\" (UID: \"617d6937-5180-4329-853d-32a9b1c9f510\") " pod="kube-system/coredns-66bc5c9577-p4n86"
	Oct 17 20:10:39 no-preload-449580 kubelet[2309]: I1017 20:10:39.720220    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/53d908ca-46ee-49bd-9de8-af09045721ef-tmp\") pod \"storage-provisioner\" (UID: \"53d908ca-46ee-49bd-9de8-af09045721ef\") " pod="kube-system/storage-provisioner"
	Oct 17 20:10:40 no-preload-449580 kubelet[2309]: I1017 20:10:40.589314    2309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p4n86" podStartSLOduration=14.589288901 podStartE2EDuration="14.589288901s" podCreationTimestamp="2025-10-17 20:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:10:40.579485756 +0000 UTC m=+20.164691284" watchObservedRunningTime="2025-10-17 20:10:40.589288901 +0000 UTC m=+20.174494431"
	Oct 17 20:10:40 no-preload-449580 kubelet[2309]: I1017 20:10:40.589526    2309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.58951773 podStartE2EDuration="14.58951773s" podCreationTimestamp="2025-10-17 20:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:10:40.589140961 +0000 UTC m=+20.174346491" watchObservedRunningTime="2025-10-17 20:10:40.58951773 +0000 UTC m=+20.174723258"
	Oct 17 20:10:42 no-preload-449580 kubelet[2309]: I1017 20:10:42.740192    2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59wxx\" (UniqueName: \"kubernetes.io/projected/f84fa1c2-b435-4d7e-8356-a847e5291ee8-kube-api-access-59wxx\") pod \"busybox\" (UID: \"f84fa1c2-b435-4d7e-8356-a847e5291ee8\") " pod="default/busybox"
	Oct 17 20:10:45 no-preload-449580 kubelet[2309]: I1017 20:10:45.593218    2309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.581017881 podStartE2EDuration="3.593196837s" podCreationTimestamp="2025-10-17 20:10:42 +0000 UTC" firstStartedPulling="2025-10-17 20:10:43.007334807 +0000 UTC m=+22.592540331" lastFinishedPulling="2025-10-17 20:10:45.019513762 +0000 UTC m=+24.604719287" observedRunningTime="2025-10-17 20:10:45.593005482 +0000 UTC m=+25.178211011" watchObservedRunningTime="2025-10-17 20:10:45.593196837 +0000 UTC m=+25.178402362"
	
	
	==> storage-provisioner [234dda6ffc9fd55eecd05a546d75b63661cd0c0ee39c5ae7c27c9733f941389f] <==
	I1017 20:10:40.063096       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:10:40.071864       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:10:40.072007       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:10:40.074264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:40.080313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:10:40.080463       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:10:40.080759       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-449580_fff17e48-75da-4613-9855-bb9c599d21b4!
	I1017 20:10:40.080761       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0252561b-3175-478a-ae66-c43f417b884b", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-449580_fff17e48-75da-4613-9855-bb9c599d21b4 became leader
	W1017 20:10:40.083610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:40.087446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:10:40.181591       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-449580_fff17e48-75da-4613-9855-bb9c599d21b4!
	W1017 20:10:42.091223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:42.096310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:44.099474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:44.103434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:46.106846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:46.111081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:48.114957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:48.119290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:50.123082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:50.127322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:52.130490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:52.136631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-449580 -n no-preload-449580
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-449580 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-726816 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-726816 --alsologtostderr -v=1: exit status 80 (1.722077046s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-726816 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:11:44.014703  374001 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:11:44.014824  374001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:11:44.014833  374001 out.go:374] Setting ErrFile to fd 2...
	I1017 20:11:44.014837  374001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:11:44.015036  374001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:11:44.015291  374001 out.go:368] Setting JSON to false
	I1017 20:11:44.015339  374001 mustload.go:65] Loading cluster: old-k8s-version-726816
	I1017 20:11:44.015651  374001 config.go:182] Loaded profile config "old-k8s-version-726816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:11:44.016048  374001 cli_runner.go:164] Run: docker container inspect old-k8s-version-726816 --format={{.State.Status}}
	I1017 20:11:44.034891  374001 host.go:66] Checking if "old-k8s-version-726816" exists ...
	I1017 20:11:44.035170  374001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:11:44.092769  374001 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-17 20:11:44.082361712 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:11:44.093436  374001 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-726816 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:11:44.096180  374001 out.go:179] * Pausing node old-k8s-version-726816 ... 
	I1017 20:11:44.097947  374001 host.go:66] Checking if "old-k8s-version-726816" exists ...
	I1017 20:11:44.098252  374001 ssh_runner.go:195] Run: systemctl --version
	I1017 20:11:44.098297  374001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-726816
	I1017 20:11:44.116561  374001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/old-k8s-version-726816/id_rsa Username:docker}
	I1017 20:11:44.214481  374001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:11:44.227080  374001 pause.go:52] kubelet running: true
	I1017 20:11:44.227163  374001 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:11:44.388465  374001 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:11:44.388626  374001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:11:44.462138  374001 cri.go:89] found id: "747137e5be4af0d94b6f109788cf1c1b9bafca36a0e7247a8a3f79cd60d8826b"
	I1017 20:11:44.462165  374001 cri.go:89] found id: "ebb776b4595c362bf346440793ab3e48e5a12e2379b9bcedfa1606c7e7878296"
	I1017 20:11:44.462170  374001 cri.go:89] found id: "91b37cb25594bd4a4037da457468ce8ab04be8d76be1ea150b98cac55be126b1"
	I1017 20:11:44.462174  374001 cri.go:89] found id: "d366f49e228b9559f5390fd4d4d8cbe630c4d711c715aac5c52834352215ef1c"
	I1017 20:11:44.462178  374001 cri.go:89] found id: "c68be51b1893d00600739733307a7ad07027891e96caa6eb528ee3a047f5c923"
	I1017 20:11:44.462183  374001 cri.go:89] found id: "968b01f15b0332d8945cfd8c8d6e9d02cb2f9635511ccc519c0bcf9750467356"
	I1017 20:11:44.462188  374001 cri.go:89] found id: "7881cbacb992a19527a25f5d1cce67db8caefd2e7da59b056d1c86a577aedfc1"
	I1017 20:11:44.462192  374001 cri.go:89] found id: "8d9c2dfa70a1ee7b1f6e3a8806e27f4e0cc7037f6cac3b6bdd2e92b821979c8e"
	I1017 20:11:44.462196  374001 cri.go:89] found id: "1bc61bd7d0ccf139a7202056a01d7760285248ec1015158005831ced4f43e0e7"
	I1017 20:11:44.462205  374001 cri.go:89] found id: "3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666"
	I1017 20:11:44.462209  374001 cri.go:89] found id: "6fc9076dca48eb3cdde728afd925cc98ddad1f05f397cf21464426ab3aba4eb1"
	I1017 20:11:44.462213  374001 cri.go:89] found id: ""
	I1017 20:11:44.462259  374001 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:11:44.475371  374001 retry.go:31] will retry after 343.031172ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:11:44Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:11:44.818851  374001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:11:44.832872  374001 pause.go:52] kubelet running: false
	I1017 20:11:44.832925  374001 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:11:44.977416  374001 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:11:44.977515  374001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:11:45.050176  374001 cri.go:89] found id: "747137e5be4af0d94b6f109788cf1c1b9bafca36a0e7247a8a3f79cd60d8826b"
	I1017 20:11:45.050201  374001 cri.go:89] found id: "ebb776b4595c362bf346440793ab3e48e5a12e2379b9bcedfa1606c7e7878296"
	I1017 20:11:45.050205  374001 cri.go:89] found id: "91b37cb25594bd4a4037da457468ce8ab04be8d76be1ea150b98cac55be126b1"
	I1017 20:11:45.050208  374001 cri.go:89] found id: "d366f49e228b9559f5390fd4d4d8cbe630c4d711c715aac5c52834352215ef1c"
	I1017 20:11:45.050210  374001 cri.go:89] found id: "c68be51b1893d00600739733307a7ad07027891e96caa6eb528ee3a047f5c923"
	I1017 20:11:45.050213  374001 cri.go:89] found id: "968b01f15b0332d8945cfd8c8d6e9d02cb2f9635511ccc519c0bcf9750467356"
	I1017 20:11:45.050216  374001 cri.go:89] found id: "7881cbacb992a19527a25f5d1cce67db8caefd2e7da59b056d1c86a577aedfc1"
	I1017 20:11:45.050218  374001 cri.go:89] found id: "8d9c2dfa70a1ee7b1f6e3a8806e27f4e0cc7037f6cac3b6bdd2e92b821979c8e"
	I1017 20:11:45.050220  374001 cri.go:89] found id: "1bc61bd7d0ccf139a7202056a01d7760285248ec1015158005831ced4f43e0e7"
	I1017 20:11:45.050227  374001 cri.go:89] found id: "3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666"
	I1017 20:11:45.050229  374001 cri.go:89] found id: "6fc9076dca48eb3cdde728afd925cc98ddad1f05f397cf21464426ab3aba4eb1"
	I1017 20:11:45.050231  374001 cri.go:89] found id: ""
	I1017 20:11:45.050275  374001 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:11:45.063231  374001 retry.go:31] will retry after 358.222346ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:11:45Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:11:45.421817  374001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:11:45.436453  374001 pause.go:52] kubelet running: false
	I1017 20:11:45.436519  374001 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:11:45.588083  374001 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:11:45.588171  374001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:11:45.664257  374001 cri.go:89] found id: "747137e5be4af0d94b6f109788cf1c1b9bafca36a0e7247a8a3f79cd60d8826b"
	I1017 20:11:45.664284  374001 cri.go:89] found id: "ebb776b4595c362bf346440793ab3e48e5a12e2379b9bcedfa1606c7e7878296"
	I1017 20:11:45.664290  374001 cri.go:89] found id: "91b37cb25594bd4a4037da457468ce8ab04be8d76be1ea150b98cac55be126b1"
	I1017 20:11:45.664293  374001 cri.go:89] found id: "d366f49e228b9559f5390fd4d4d8cbe630c4d711c715aac5c52834352215ef1c"
	I1017 20:11:45.664296  374001 cri.go:89] found id: "c68be51b1893d00600739733307a7ad07027891e96caa6eb528ee3a047f5c923"
	I1017 20:11:45.664300  374001 cri.go:89] found id: "968b01f15b0332d8945cfd8c8d6e9d02cb2f9635511ccc519c0bcf9750467356"
	I1017 20:11:45.664302  374001 cri.go:89] found id: "7881cbacb992a19527a25f5d1cce67db8caefd2e7da59b056d1c86a577aedfc1"
	I1017 20:11:45.664316  374001 cri.go:89] found id: "8d9c2dfa70a1ee7b1f6e3a8806e27f4e0cc7037f6cac3b6bdd2e92b821979c8e"
	I1017 20:11:45.664318  374001 cri.go:89] found id: "1bc61bd7d0ccf139a7202056a01d7760285248ec1015158005831ced4f43e0e7"
	I1017 20:11:45.664332  374001 cri.go:89] found id: "3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666"
	I1017 20:11:45.664335  374001 cri.go:89] found id: "6fc9076dca48eb3cdde728afd925cc98ddad1f05f397cf21464426ab3aba4eb1"
	I1017 20:11:45.664337  374001 cri.go:89] found id: ""
	I1017 20:11:45.664374  374001 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:11:45.679444  374001 out.go:203] 
	W1017 20:11:45.680979  374001 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:11:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:11:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:11:45.681015  374001 out.go:285] * 
	* 
	W1017 20:11:45.685331  374001 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:11:45.686765  374001 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-726816 --alsologtostderr -v=1 failed: exit status 80
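The failure mode is visible in the stderr above: kubelet is stopped cleanly and crictl still enumerates the kube-system containers, but every "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory", so minikube retries with roughly 350ms of jittered backoff (retry.go:31) and then aborts the pause with GUEST_PAUSE and exit status 80. Below is a rough Go sketch of that retry-then-give-up shape; it is an illustration of the flow in the log, not minikube's actual retry.go, and the attempt count and backoff range are assumptions:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listRunc mirrors the failing call from the log: sudo runc list -f json.
func listRunc() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := listRunc()
		if err == nil {
			fmt.Printf("runc containers: %s\n", out)
			return
		}
		lastErr = err
		// Jittered backoff in the ~350ms range, matching the
		// "will retry after 343ms / 358ms" lines above.
		d := 300*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond
		fmt.Printf("attempt %d failed, retrying in %v: %v\n", attempt, d, err)
		time.Sleep(d)
	}
	// minikube surfaces this terminal state as "Exiting due to
	// GUEST_PAUSE" and exits with status 80.
	fmt.Printf("giving up: %v\n", lastErr)
}

Because the same runc error repeats on every attempt, the pause fails deterministically in under two seconds rather than timing out.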
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-726816
helpers_test.go:243: (dbg) docker inspect old-k8s-version-726816:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d",
	        "Created": "2025-10-17T20:09:36.13713151Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 365834,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:10:48.649795348Z",
	            "FinishedAt": "2025-10-17T20:10:47.795755739Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d/hostname",
	        "HostsPath": "/var/lib/docker/containers/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d/hosts",
	        "LogPath": "/var/lib/docker/containers/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d-json.log",
	        "Name": "/old-k8s-version-726816",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-726816:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-726816",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d",
	                "LowerDir": "/var/lib/docker/overlay2/5dcb54ae27fdd82c6888e48a7ef95596d62c8f5db714aa4e6a3ed9f11e961e43-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5dcb54ae27fdd82c6888e48a7ef95596d62c8f5db714aa4e6a3ed9f11e961e43/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5dcb54ae27fdd82c6888e48a7ef95596d62c8f5db714aa4e6a3ed9f11e961e43/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5dcb54ae27fdd82c6888e48a7ef95596d62c8f5db714aa4e6a3ed9f11e961e43/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-726816",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-726816/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-726816",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-726816",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-726816",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1eb1619b61aaef7b358e4c292e0071d83beec24bdd94d99b443d0be673341be2",
	            "SandboxKey": "/var/run/docker/netns/1eb1619b61aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-726816": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:73:3c:95:0c:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a2f3c9774d269d6de3a98b72179a7362d7a29c679daa09f837b76252bd896b76",
	                    "EndpointID": "dffde74fb416ead1b5e599083f84261984bf81bf394be56d27d2bde8956567e3",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-726816",
	                        "5fe53cd658e3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
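The inspect output shows the Docker side is still healthy: State.Status is "running" and State.Paused is "false", so the exit 80 above came from the runc call inside the guest, not from the container on the host. The same fields the post-mortem keys on can be read directly with a Go template, for example: docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-726816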
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-726816 -n old-k8s-version-726816
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-726816 -n old-k8s-version-726816: exit status 2 (334.353418ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-726816 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-726816 logs -n 25: (1.179130815s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p cilium-684669                                                                                                                                                                                                                              │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p running-upgrade-097245 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                                          │ running-upgrade-097245    │ jenkins │ v1.32.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p force-systemd-env-834947                                                                                                                                                                                                                   │ force-systemd-env-834947  │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p cert-expiration-202048 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-202048    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p missing-upgrade-159057 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-159057    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ stop    │ -p kubernetes-upgrade-660693                                                                                                                                                                                                                  │ kubernetes-upgrade-660693 │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-660693 │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ start   │ -p running-upgrade-097245 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-097245    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p missing-upgrade-159057                                                                                                                                                                                                                     │ missing-upgrade-159057    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p force-systemd-flag-599050 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p running-upgrade-097245                                                                                                                                                                                                                     │ running-upgrade-097245    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ ssh     │ force-systemd-flag-599050 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p force-systemd-flag-599050                                                                                                                                                                                                                  │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-726816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p old-k8s-version-726816 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-726816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-449580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p no-preload-449580 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:11 UTC │
	│ addons  │ enable dashboard -p no-preload-449580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	│ image   │ old-k8s-version-726816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ pause   │ -p old-k8s-version-726816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:11:09
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:11:09.309966  369697 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:11:09.310208  369697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:11:09.310217  369697 out.go:374] Setting ErrFile to fd 2...
	I1017 20:11:09.310220  369697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:11:09.310449  369697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:11:09.310972  369697 out.go:368] Setting JSON to false
	I1017 20:11:09.312229  369697 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6817,"bootTime":1760725052,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:11:09.312334  369697 start.go:141] virtualization: kvm guest
	I1017 20:11:09.314475  369697 out.go:179] * [no-preload-449580] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:11:09.315904  369697 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:11:09.315897  369697 notify.go:220] Checking for updates...
	I1017 20:11:09.317584  369697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:11:09.319369  369697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:11:09.320988  369697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:11:09.322424  369697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:11:09.324061  369697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:11:09.325990  369697 config.go:182] Loaded profile config "no-preload-449580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:11:09.326672  369697 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:11:09.352325  369697 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:11:09.352431  369697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:11:09.414899  369697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:11:09.403433283 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:11:09.415005  369697 docker.go:318] overlay module found
	I1017 20:11:09.418197  369697 out.go:179] * Using the docker driver based on existing profile
	I1017 20:11:09.419596  369697 start.go:305] selected driver: docker
	I1017 20:11:09.419622  369697 start.go:925] validating driver "docker" against &{Name:no-preload-449580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:11:09.419763  369697 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:11:09.420416  369697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:11:09.478589  369697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:11:09.467148832 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:11:09.478931  369697 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:11:09.478962  369697 cni.go:84] Creating CNI manager for ""
	I1017 20:11:09.479055  369697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:11:09.479098  369697 start.go:349] cluster config:
	{Name:no-preload-449580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:11:09.481268  369697 out.go:179] * Starting "no-preload-449580" primary control-plane node in "no-preload-449580" cluster
	I1017 20:11:09.482697  369697 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:11:09.484146  369697 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:11:09.485444  369697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:11:09.485559  369697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:11:09.485580  369697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/config.json ...
	I1017 20:11:09.485881  369697 cache.go:107] acquiring lock: {Name:mkd0df842d4d8da119c6855ae5b215973a1bd054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.485942  369697 cache.go:107] acquiring lock: {Name:mkb1ea73854f03abddddc66ea6d8ff48980053b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.485935  369697 cache.go:107] acquiring lock: {Name:mk495930b32aab4137b78173fcb5d9cf58d8239c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.485991  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1017 20:11:09.485962  369697 cache.go:107] acquiring lock: {Name:mk79978b0094a0a4fe274208f9bd0f469915fa13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.486022  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1017 20:11:09.486034  369697 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 100µs
	I1017 20:11:09.486036  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1017 20:11:09.486049  369697 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 112.078µs
	I1017 20:11:09.486054  369697 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1017 20:11:09.486058  369697 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1017 20:11:09.486005  369697 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.694µs
	I1017 20:11:09.486077  369697 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1017 20:11:09.485881  369697 cache.go:107] acquiring lock: {Name:mk95a64393bf585bd3acb10c28b2e4486b82554a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.486064  369697 cache.go:107] acquiring lock: {Name:mk1e16df1578e3f66034d7e28be03b6ac01b470a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.486093  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1017 20:11:09.486101  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1017 20:11:09.486105  369697 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 245.56µs
	I1017 20:11:09.486104  369697 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 206.043µs
	I1017 20:11:09.486122  369697 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1017 20:11:09.486049  369697 cache.go:107] acquiring lock: {Name:mk47a558c7bfc49677b52c17a6cb39d0217750ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.485891  369697 cache.go:107] acquiring lock: {Name:mk58620b56df75044fc4da2f75d8900d628a7966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.486223  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1017 20:11:09.486127  369697 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1017 20:11:09.486240  369697 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 266.65µs
	I1017 20:11:09.486252  369697 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1017 20:11:09.486223  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1017 20:11:09.486274  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1017 20:11:09.486298  369697 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 415.745µs
	I1017 20:11:09.486311  369697 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1017 20:11:09.486273  369697 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 290.72µs
	I1017 20:11:09.486332  369697 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1017 20:11:09.486340  369697 cache.go:87] Successfully saved all images to host disk.
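The stanza above interleaves several goroutines, each taking a per-target lock, finding the cached image tar already on disk, and declaring the save a success in well under a millisecond. A minimal Go sketch of that lock/check/save pattern, with hypothetical helper names rather than minikube's actual code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
)

// saveToTar stands in for the real image export; here it just creates the file.
func saveToTar(img, dst string) error {
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	return os.WriteFile(dst, []byte(img), 0o644)
}

func cacheImage(locks *sync.Map, cacheDir, img string) error {
	dst := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
	mu, _ := locks.LoadOrStore(dst, &sync.Mutex{}) // one lock per target file
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("%s exists, skipping save\n", dst) // the "exists ... succeeded" case above
		return nil
	}
	return saveToTar(img, dst)
}

func main() {
	var locks sync.Map
	var wg sync.WaitGroup
	for _, img := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.4-0"} {
		wg.Add(1)
		go func(i string) { defer wg.Done(); _ = cacheImage(&locks, "/tmp/cache", i) }(img)
	}
	wg.Wait()
}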
	I1017 20:11:09.507355  369697 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:11:09.507377  369697 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:11:09.507395  369697 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:11:09.507421  369697 start.go:360] acquireMachinesLock for no-preload-449580: {Name:mk19bcf32a0d1bfb1bd4e113ba01604af981e85e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.507474  369697 start.go:364] duration metric: took 37.038µs to acquireMachinesLock for "no-preload-449580"
	I1017 20:11:09.507493  369697 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:11:09.507498  369697 fix.go:54] fixHost starting: 
	I1017 20:11:09.507830  369697 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:11:09.526695  369697 fix.go:112] recreateIfNeeded on no-preload-449580: state=Stopped err=<nil>
	W1017 20:11:09.526752  369697 fix.go:138] unexpected machine state, will restart: <nil>
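fixHost has just found the container stopped and decided to restart it rather than recreate it. A hypothetical reduction of that inspect-then-start step (docker itself reports a stopped container as "exited"; minikube logs it as state=Stopped):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func ensureRunning(name string) error {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return fmt.Errorf("inspect %s: %w", name, err)
	}
	state := strings.TrimSpace(string(out))
	if state == "running" {
		return nil
	}
	fmt.Printf("state=%s, restarting %s\n", state, name)
	return exec.Command("docker", "start", name).Run()
}

func main() {
	if err := ensureRunning("no-preload-449580"); err != nil {
		fmt.Println(err)
	}
}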
	I1017 20:11:08.515833  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1017 20:11:08.515905  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:08.515972  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:08.544491  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:08.544514  344862 cri.go:89] found id: "924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	I1017 20:11:08.544518  344862 cri.go:89] found id: ""
	I1017 20:11:08.544526  344862 logs.go:282] 2 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709]
	I1017 20:11:08.544576  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:08.548791  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:08.553205  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:08.553280  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:08.581429  344862 cri.go:89] found id: ""
	I1017 20:11:08.581454  344862 logs.go:282] 0 containers: []
	W1017 20:11:08.581462  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:08.581468  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:08.581515  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:08.609680  344862 cri.go:89] found id: ""
	I1017 20:11:08.609715  344862 logs.go:282] 0 containers: []
	W1017 20:11:08.609728  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:08.609755  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:08.609812  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:08.638035  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:08.638061  344862 cri.go:89] found id: ""
	I1017 20:11:08.638071  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:08.638137  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:08.642210  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:08.642287  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:08.670140  344862 cri.go:89] found id: ""
	I1017 20:11:08.670167  344862 logs.go:282] 0 containers: []
	W1017 20:11:08.670178  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:08.670189  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:08.670256  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:08.699173  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:08.699201  344862 cri.go:89] found id: "a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc"
	I1017 20:11:08.699206  344862 cri.go:89] found id: ""
	I1017 20:11:08.699214  344862 logs.go:282] 2 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2 a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc]
	I1017 20:11:08.699262  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:08.703348  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:08.707502  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:08.707576  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:08.739927  344862 cri.go:89] found id: ""
	I1017 20:11:08.739960  344862 logs.go:282] 0 containers: []
	W1017 20:11:08.739973  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:08.739980  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:08.740045  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:08.772763  344862 cri.go:89] found id: ""
	I1017 20:11:08.772793  344862 logs.go:282] 0 containers: []
	W1017 20:11:08.772803  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:08.772821  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:08.772836  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:08.822890  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:08.822931  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:08.858423  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:08.858454  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:11:08.946461  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:08.946503  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:08.871511  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	W1017 20:11:11.374176  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
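While it waits for the apiserver, the 344862 process above cycles through a fixed set of diagnostic probes ("Gathering logs for ..."). A sketch of that loop, with the SSH transport replaced by plain exec for brevity:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings taken from the log lines above.
	probes := map[string][]string{
		"CRI-O":            {"bash", "-c", "sudo journalctl -u crio -n 400"},
		"container status": {"bash", "-c", "sudo crictl ps -a"},
		"kubelet":          {"bash", "-c", "sudo journalctl -u kubelet -n 400"},
	}
	for name, cmd := range probes {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command(cmd[0], cmd[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("  %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("  %d bytes collected\n", len(out))
	}
}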
	I1017 20:11:09.529089  369697 out.go:252] * Restarting existing docker container for "no-preload-449580" ...
	I1017 20:11:09.529197  369697 cli_runner.go:164] Run: docker start no-preload-449580
	I1017 20:11:09.784940  369697 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:11:09.807039  369697 kic.go:430] container "no-preload-449580" state is running.
	I1017 20:11:09.807422  369697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-449580
	I1017 20:11:09.827219  369697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/config.json ...
	I1017 20:11:09.827497  369697 machine.go:93] provisionDockerMachine start ...
	I1017 20:11:09.827582  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:09.847150  369697 main.go:141] libmachine: Using SSH client type: native
	I1017 20:11:09.847413  369697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 20:11:09.847427  369697 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:11:09.848075  369697 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39148->127.0.0.1:33184: read: connection reset by peer
	I1017 20:11:13.005010  369697 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-449580
	
	I1017 20:11:13.005039  369697 ubuntu.go:182] provisioning hostname "no-preload-449580"
	I1017 20:11:13.005126  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:13.029495  369697 main.go:141] libmachine: Using SSH client type: native
	I1017 20:11:13.029829  369697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 20:11:13.029866  369697 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-449580 && echo "no-preload-449580" | sudo tee /etc/hostname
	I1017 20:11:13.197081  369697 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-449580
	
	I1017 20:11:13.197191  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:13.220534  369697 main.go:141] libmachine: Using SSH client type: native
	I1017 20:11:13.220907  369697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 20:11:13.220933  369697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-449580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-449580/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-449580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:11:13.371904  369697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
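The inline script above rewrites the 127.0.1.1 entry only when the new hostname is not already present in /etc/hosts. A small Go helper that reproduces the snippet for an arbitrary hostname (illustrative, not minikube's generator):

package main

import "fmt"

// hostsFixup returns the /etc/hosts snippet shown in the log for one hostname.
func hostsFixup(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() { fmt.Println(hostsFixup("no-preload-449580")) }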
	I1017 20:11:13.371937  369697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:11:13.371972  369697 ubuntu.go:190] setting up certificates
	I1017 20:11:13.371983  369697 provision.go:84] configureAuth start
	I1017 20:11:13.372088  369697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-449580
	I1017 20:11:13.391815  369697 provision.go:143] copyHostCerts
	I1017 20:11:13.391885  369697 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:11:13.391902  369697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:11:13.391979  369697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:11:13.393129  369697 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:11:13.393150  369697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:11:13.393192  369697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:11:13.393267  369697 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:11:13.393286  369697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:11:13.393314  369697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:11:13.393365  369697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.no-preload-449580 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-449580]
	I1017 20:11:13.744061  369697 provision.go:177] copyRemoteCerts
	I1017 20:11:13.744140  369697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:11:13.744188  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:13.766722  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:13.875513  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:11:13.900515  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:11:13.924394  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:11:13.948134  369697 provision.go:87] duration metric: took 576.13484ms to configureAuth
	I1017 20:11:13.948164  369697 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:11:13.948396  369697 config.go:182] Loaded profile config "no-preload-449580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:11:13.948515  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:13.971558  369697 main.go:141] libmachine: Using SSH client type: native
	I1017 20:11:13.971916  369697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 20:11:13.971945  369697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:11:14.455814  369697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:11:14.455843  369697 machine.go:96] duration metric: took 4.628324875s to provisionDockerMachine
	I1017 20:11:14.455858  369697 start.go:293] postStartSetup for "no-preload-449580" (driver="docker")
	I1017 20:11:14.455871  369697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:11:14.455943  369697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:11:14.456014  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:14.478334  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:14.589292  369697 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:11:14.595110  369697 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:11:14.595148  369697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:11:14.595164  369697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:11:14.595236  369697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:11:14.595362  369697 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:11:14.595507  369697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:11:14.607816  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:11:14.634414  369697 start.go:296] duration metric: took 178.536291ms for postStartSetup
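postStartSetup scanned .minikube/files and shipped each local asset to the matching absolute path on the node (1392172.pem -> /etc/ssl/certs above). A sketch of that scan, assuming the same directory layout:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	root := "/home/jenkins/minikube-integration/21664-135723/.minikube/files" // from the log
	_ = filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, _ := filepath.Rel(root, p)
		// e.g. .../files/etc/ssl/certs/1392172.pem -> /etc/ssl/certs/1392172.pem
		fmt.Printf("local asset: %s -> /%s\n", p, rel)
		return nil
	})
}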
	I1017 20:11:14.634531  369697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:11:14.634583  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:14.658637  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:14.766088  369697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:11:14.773258  369697 fix.go:56] duration metric: took 5.265741716s for fixHost
	I1017 20:11:14.773296  369697 start.go:83] releasing machines lock for "no-preload-449580", held for 5.265809991s
	I1017 20:11:14.773375  369697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-449580
	I1017 20:11:14.796569  369697 ssh_runner.go:195] Run: cat /version.json
	I1017 20:11:14.796623  369697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:11:14.796628  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:14.796703  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:14.820095  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:14.820652  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:15.005027  369697 ssh_runner.go:195] Run: systemctl --version
	I1017 20:11:15.014767  369697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:11:15.063813  369697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:11:15.070784  369697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:11:15.070859  369697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:11:15.082291  369697 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:11:15.082319  369697 start.go:495] detecting cgroup driver to use...
	I1017 20:11:15.082356  369697 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:11:15.082404  369697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:11:15.105262  369697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:11:15.123903  369697 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:11:15.123964  369697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:11:15.146167  369697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:11:15.164332  369697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:11:15.275997  369697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:11:15.397994  369697 docker.go:234] disabling docker service ...
	I1017 20:11:15.398071  369697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:11:15.420570  369697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:11:15.438073  369697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:11:15.563348  369697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:11:15.662806  369697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:11:15.676940  369697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:11:15.693605  369697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:11:15.693675  369697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.703993  369697 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:11:15.704123  369697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.716079  369697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.726038  369697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.737273  369697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:11:15.748257  369697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.759653  369697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.771820  369697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.785001  369697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:11:15.794697  369697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:11:15.804198  369697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:11:15.921518  369697 ssh_runner.go:195] Run: sudo systemctl restart crio
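The lines above apply a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before reloading systemd and restarting cri-o. A condensed sketch of a subset of that sequence; each entry mirrors one command from the log, so run it only on a disposable node:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' ` + conf,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' ` + conf,
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := exec.Command("sh", "-c", s).Run(); err != nil {
			fmt.Printf("step %q failed: %v\n", s, err)
			return
		}
	}
}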
	I1017 20:11:16.801402  369697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:11:16.801505  369697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:11:16.807082  369697 start.go:563] Will wait 60s for crictl version
	I1017 20:11:16.807155  369697 ssh_runner.go:195] Run: which crictl
	I1017 20:11:16.812773  369697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:11:16.845085  369697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:11:16.845171  369697 ssh_runner.go:195] Run: crio --version
	I1017 20:11:16.884182  369697 ssh_runner.go:195] Run: crio --version
	I1017 20:11:16.929915  369697 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:11:16.931726  369697 cli_runner.go:164] Run: docker network inspect no-preload-449580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:11:16.953119  369697 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1017 20:11:16.958179  369697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:11:16.972334  369697 kubeadm.go:883] updating cluster {Name:no-preload-449580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:11:16.972487  369697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:11:16.972532  369697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:11:17.018057  369697 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:11:17.018081  369697 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:11:17.018089  369697 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1017 20:11:17.018198  369697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-449580 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:11:17.018298  369697 ssh_runner.go:195] Run: crio config
	I1017 20:11:17.082825  369697 cni.go:84] Creating CNI manager for ""
	I1017 20:11:17.082853  369697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:11:17.082875  369697 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:11:17.082908  369697 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-449580 NodeName:no-preload-449580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:11:17.083064  369697 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-449580"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:11:17.083146  369697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:11:17.094587  369697 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:11:17.094684  369697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:11:17.105420  369697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 20:11:17.124560  369697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:11:17.143255  369697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
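The kubeadm.yaml shown above is rendered and shipped to /var/tmp/minikube/kubeadm.yaml.new, where it is later diffed against the live copy. A minimal text/template sketch of such rendering; the struct fields here are illustrative, and minikube renders a much fuller document:

package main

import (
	"os"
	"text/template"
)

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.Name}}"
`))
	_ = tmpl.Execute(os.Stdout, struct {
		NodeIP string
		Port   int
		Name   string
	}{"192.168.103.2", 8443, "no-preload-449580"})
}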
	I1017 20:11:17.160975  369697 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:11:17.166050  369697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:11:17.178161  369697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:11:17.271526  369697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:11:17.295806  369697 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580 for IP: 192.168.103.2
	I1017 20:11:17.295832  369697 certs.go:195] generating shared ca certs ...
	I1017 20:11:17.295853  369697 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:11:17.296045  369697 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:11:17.296127  369697 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:11:17.296145  369697 certs.go:257] generating profile certs ...
	I1017 20:11:17.296247  369697 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.key
	I1017 20:11:17.296322  369697 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.key.15dab988
	I1017 20:11:17.296382  369697 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.key
	I1017 20:11:17.296528  369697 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:11:17.296563  369697 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:11:17.296576  369697 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:11:17.296600  369697 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:11:17.296621  369697 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:11:17.296641  369697 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:11:17.296693  369697 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:11:17.297547  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:11:17.323192  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:11:17.348271  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:11:17.376073  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:11:17.406389  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 20:11:17.431556  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:11:17.456803  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:11:17.481444  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:11:17.506379  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:11:17.533328  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:11:17.558369  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:11:17.584046  369697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:11:17.602520  369697 ssh_runner.go:195] Run: openssl version
	I1017 20:11:17.611281  369697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:11:17.623462  369697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:11:17.629201  369697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:11:17.629289  369697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:11:17.681277  369697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:11:17.692953  369697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:11:17.706377  369697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:11:17.712206  369697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:11:17.712285  369697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:11:17.769117  369697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:11:17.779990  369697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:11:17.792341  369697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:11:17.798211  369697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:11:17.798273  369697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:11:17.854590  369697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
	I1017 20:11:17.866662  369697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:11:17.874074  369697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:11:17.929263  369697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:11:17.995850  369697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:11:18.048085  369697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:11:18.092471  369697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:11:18.136040  369697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
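Each `openssl x509 -checkend 86400` probe above asks whether a certificate expires within the next day. The same check expressed with Go's crypto/x509 (paths as in the log; the helper name is hypothetical):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin answers the same question as `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}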
	I1017 20:11:18.181322  369697 kubeadm.go:400] StartCluster: {Name:no-preload-449580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:11:18.181432  369697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:11:18.181514  369697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:11:18.218072  369697 cri.go:89] found id: "344d142d37fe5e0cf83f172832d2f0380baafcfe5af95563d75af080c8f38c3c"
	I1017 20:11:18.218101  369697 cri.go:89] found id: "6cf770e38746c4716bb308f95e151bdd97000b0a2142f8c26a0763b88060594f"
	I1017 20:11:18.218107  369697 cri.go:89] found id: "09d3164355d524c8b81db0b45da6184b8608f2453c76034f04243ff5a2366382"
	I1017 20:11:18.218111  369697 cri.go:89] found id: "da4d6ced5b128794ebcf1eb3fba8085c8b428be8cc20e7b0cbbeb23351ceb4d4"
	I1017 20:11:18.218115  369697 cri.go:89] found id: ""
	I1017 20:11:18.218169  369697 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:11:18.232876  369697 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:11:18Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:11:18.232963  369697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:11:18.243350  369697 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:11:18.243372  369697 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:11:18.243425  369697 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:11:18.252041  369697 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:11:18.253052  369697 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-449580" does not appear in /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:11:18.253650  369697 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-135723/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-449580" cluster setting kubeconfig missing "no-preload-449580" context setting]
	I1017 20:11:18.254694  369697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:11:18.256699  369697 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:11:18.265849  369697 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1017 20:11:18.265892  369697 kubeadm.go:601] duration metric: took 22.513294ms to restartPrimaryControlPlane
	I1017 20:11:18.265904  369697 kubeadm.go:402] duration metric: took 84.595638ms to StartCluster
	I1017 20:11:18.265935  369697 settings.go:142] acquiring lock: {Name:mka4633fb25e97d0a4c6d64012444d90b7517c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:11:18.266007  369697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:11:18.267783  369697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:11:18.268056  369697 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:11:18.268111  369697 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:11:18.268260  369697 config.go:182] Loaded profile config "no-preload-449580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:11:18.268313  369697 addons.go:69] Setting default-storageclass=true in profile "no-preload-449580"
	I1017 20:11:18.268312  369697 addons.go:69] Setting dashboard=true in profile "no-preload-449580"
	I1017 20:11:18.268336  369697 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-449580"
	I1017 20:11:18.268341  369697 addons.go:238] Setting addon dashboard=true in "no-preload-449580"
	W1017 20:11:18.268350  369697 addons.go:247] addon dashboard should already be in state true
	I1017 20:11:18.268380  369697 host.go:66] Checking if "no-preload-449580" exists ...
	I1017 20:11:18.268542  369697 addons.go:69] Setting storage-provisioner=true in profile "no-preload-449580"
	I1017 20:11:18.268573  369697 addons.go:238] Setting addon storage-provisioner=true in "no-preload-449580"
	W1017 20:11:18.268589  369697 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:11:18.268622  369697 host.go:66] Checking if "no-preload-449580" exists ...
	I1017 20:11:18.268659  369697 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:11:18.269151  369697 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:11:18.269453  369697 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:11:18.271876  369697 out.go:179] * Verifying Kubernetes components...
	I1017 20:11:18.273577  369697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:11:18.301350  369697 addons.go:238] Setting addon default-storageclass=true in "no-preload-449580"
	W1017 20:11:18.301375  369697 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:11:18.301403  369697 host.go:66] Checking if "no-preload-449580" exists ...
	I1017 20:11:18.301856  369697 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:11:18.302662  369697 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 20:11:18.304361  369697 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 20:11:18.305887  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 20:11:18.305908  369697 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 20:11:18.305968  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:18.307997  369697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1017 20:11:13.872483  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	W1017 20:11:16.371852  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	W1017 20:11:18.374401  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	I1017 20:11:18.310061  369697 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:11:18.310083  369697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:11:18.310144  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:18.337046  369697 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:11:18.337083  369697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:11:18.337146  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:18.344242  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:18.344915  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:18.364268  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:18.439288  369697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:11:18.453620  369697 node_ready.go:35] waiting up to 6m0s for node "no-preload-449580" to be "Ready" ...
	I1017 20:11:18.471551  369697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:11:18.472160  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 20:11:18.472184  369697 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 20:11:18.487614  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 20:11:18.487642  369697 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 20:11:18.500967  369697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:11:18.504449  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 20:11:18.504476  369697 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 20:11:18.527921  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 20:11:18.527947  369697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 20:11:18.549141  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 20:11:18.549166  369697 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 20:11:18.564652  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 20:11:18.564681  369697 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 20:11:18.583489  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 20:11:18.583522  369697 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 20:11:18.598664  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 20:11:18.598689  369697 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 20:11:18.614079  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:11:18.614110  369697 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 20:11:18.629544  369697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
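Each dashboard manifest above follows the same two-step pattern: scp the YAML into /etc/kubernetes/addons, then apply every file in one kubectl invocation with repeated -f flags, exactly as in the command on the preceding line. A hedged sketch of assembling that final command (file names taken from the log; the sudo/SSH plumbing that actually runs it is elided):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        manifests := []string{
            "dashboard-ns.yaml", "dashboard-clusterrole.yaml",
            "dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
            "dashboard-dp.yaml", "dashboard-role.yaml",
            "dashboard-rolebinding.yaml", "dashboard-sa.yaml",
            "dashboard-secret.yaml", "dashboard-svc.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", "/etc/kubernetes/addons/"+m)
        }
        // A single apply for all manifests matches the one kubectl
        // invocation in the log, rather than ten separate round trips.
        fmt.Println("kubectl " + strings.Join(args, " "))
    }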
	I1017 20:11:19.574263  369697 node_ready.go:49] node "no-preload-449580" is "Ready"
	I1017 20:11:19.574305  369697 node_ready.go:38] duration metric: took 1.120634369s for node "no-preload-449580" to be "Ready" ...
	I1017 20:11:19.574329  369697 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:11:19.574421  369697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:11:20.094382  369697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.622795832s)
	I1017 20:11:20.094444  369697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.593451666s)
	I1017 20:11:20.094799  369697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.465209602s)
	I1017 20:11:20.095081  369697 api_server.go:72] duration metric: took 1.826985712s to wait for apiserver process to appear ...
	I1017 20:11:20.095103  369697 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:11:20.095125  369697 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:11:20.097141  369697 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-449580 addons enable metrics-server
	
	I1017 20:11:20.100673  369697 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:11:20.100703  369697 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
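The 500 above is expected while the apiserver's post-start hooks are still settling: each [+] line is a passing sub-check, each [-] line a hook that has not finished, and the endpoint stays 500 until all of them clear. minikube simply re-polls until /healthz returns 200 with body "ok". A minimal polling sketch (the endpoint URL is taken from the log; skipping TLS verification is an illustration-only shortcut, the real client trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustration only: production code should verify the cluster CA.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://192.168.103.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }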
	I1017 20:11:20.105694  369697 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1017 20:11:19.010263  344862 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.063734586s)
	W1017 20:11:19.010317  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1017 20:11:19.010335  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:19.010348  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:19.075018  344862 logs.go:123] Gathering logs for kube-controller-manager [a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc] ...
	I1017 20:11:19.075112  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc"
	I1017 20:11:19.111254  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:19.111334  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:19.139790  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:19.139834  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:19.189730  344862 logs.go:123] Gathering logs for kube-apiserver [924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709] ...
	I1017 20:11:19.189786  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	I1017 20:11:19.223756  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:19.223793  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:21.757248  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1017 20:11:20.871255  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	W1017 20:11:23.370881  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	I1017 20:11:20.108700  369697 addons.go:514] duration metric: took 1.840585262s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 20:11:20.595899  369697 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:11:20.600468  369697 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:11:20.600501  369697 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:11:21.095933  369697 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:11:21.100871  369697 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1017 20:11:21.101907  369697 api_server.go:141] control plane version: v1.34.1
	I1017 20:11:21.101931  369697 api_server.go:131] duration metric: took 1.006820268s to wait for apiserver health ...
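Comparing the three polls above shows the hooks draining one by one: at 20:11:20.100 both rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes were failing, at 20:11:20.600 only rbac remained, and by 20:11:21.100 the endpoint returned 200, so the 500s resolved themselves in roughly one second.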
	I1017 20:11:21.101939  369697 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:11:21.106206  369697 system_pods.go:59] 8 kube-system pods found
	I1017 20:11:21.106249  369697 system_pods.go:61] "coredns-66bc5c9577-p4n86" [617d6937-5180-4329-853d-32a9b1c9f510] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:11:21.106260  369697 system_pods.go:61] "etcd-no-preload-449580" [fb200953-462a-4d0e-a897-8503ebe3a57f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:11:21.106271  369697 system_pods.go:61] "kindnet-9xg9h" [673bfee2-dc28-4a9a-815e-0f57d9dd92f8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 20:11:21.106285  369697 system_pods.go:61] "kube-apiserver-no-preload-449580" [4b67f8cf-2d87-4f26-9c70-08870061761a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:11:21.106298  369697 system_pods.go:61] "kube-controller-manager-no-preload-449580" [f1bb561c-bd36-440a-a61e-bae20669a3d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:11:21.106310  369697 system_pods.go:61] "kube-proxy-m5g7f" [b0d544c6-f6c2-459c-93b9-22452c8a77d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:11:21.106320  369697 system_pods.go:61] "kube-scheduler-no-preload-449580" [2f387b59-7741-4394-8cdd-791ef636b645] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:11:21.106332  369697 system_pods.go:61] "storage-provisioner" [53d908ca-46ee-49bd-9de8-af09045721ef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:11:21.106343  369697 system_pods.go:74] duration metric: took 4.396853ms to wait for pod list to return data ...
	I1017 20:11:21.106360  369697 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:11:21.109083  369697 default_sa.go:45] found service account: "default"
	I1017 20:11:21.109107  369697 default_sa.go:55] duration metric: took 2.740469ms for default service account to be created ...
	I1017 20:11:21.109119  369697 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:11:21.112010  369697 system_pods.go:86] 8 kube-system pods found
	I1017 20:11:21.112041  369697 system_pods.go:89] "coredns-66bc5c9577-p4n86" [617d6937-5180-4329-853d-32a9b1c9f510] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:11:21.112049  369697 system_pods.go:89] "etcd-no-preload-449580" [fb200953-462a-4d0e-a897-8503ebe3a57f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:11:21.112058  369697 system_pods.go:89] "kindnet-9xg9h" [673bfee2-dc28-4a9a-815e-0f57d9dd92f8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 20:11:21.112066  369697 system_pods.go:89] "kube-apiserver-no-preload-449580" [4b67f8cf-2d87-4f26-9c70-08870061761a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:11:21.112072  369697 system_pods.go:89] "kube-controller-manager-no-preload-449580" [f1bb561c-bd36-440a-a61e-bae20669a3d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:11:21.112078  369697 system_pods.go:89] "kube-proxy-m5g7f" [b0d544c6-f6c2-459c-93b9-22452c8a77d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:11:21.112086  369697 system_pods.go:89] "kube-scheduler-no-preload-449580" [2f387b59-7741-4394-8cdd-791ef636b645] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:11:21.112102  369697 system_pods.go:89] "storage-provisioner" [53d908ca-46ee-49bd-9de8-af09045721ef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:11:21.112113  369697 system_pods.go:126] duration metric: took 2.987402ms to wait for k8s-apps to be running ...
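Every pod above reports Running but ContainersNotReady, which is why the phase alone is not enough and the Ready condition has to be inspected separately. A sketch of the equivalent check with client-go (the kubeconfig path is the one used elsewhere in this log; running in-cluster would use rest.InClusterConfig instead):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, pod := range pods.Items {
            ready := false
            for _, c := range pod.Status.Conditions {
                // A pod counts as Ready only when the PodReady condition is True,
                // regardless of Status.Phase being "Running".
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%s phase=%s ready=%v\n", pod.Name, pod.Status.Phase, ready)
        }
    }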
	I1017 20:11:21.112123  369697 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:11:21.112170  369697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:11:21.126489  369697 system_svc.go:56] duration metric: took 14.352119ms WaitForService to wait for kubelet
	I1017 20:11:21.126520  369697 kubeadm.go:586] duration metric: took 2.858428752s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:11:21.126538  369697 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:11:21.130113  369697 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:11:21.130152  369697 node_conditions.go:123] node cpu capacity is 8
	I1017 20:11:21.130170  369697 node_conditions.go:105] duration metric: took 3.625938ms to run NodePressure ...
	I1017 20:11:21.130187  369697 start.go:241] waiting for startup goroutines ...
	I1017 20:11:21.130197  369697 start.go:246] waiting for cluster config update ...
	I1017 20:11:21.130212  369697 start.go:255] writing updated cluster config ...
	I1017 20:11:21.130573  369697 ssh_runner.go:195] Run: rm -f paused
	I1017 20:11:21.135111  369697 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:11:21.139554  369697 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p4n86" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:11:23.144945  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	I1017 20:11:23.633422  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:56782->192.168.76.2:8443: read: connection reset by peer
	I1017 20:11:23.633499  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:23.633558  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:23.664997  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:23.665022  344862 cri.go:89] found id: "924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	I1017 20:11:23.665027  344862 cri.go:89] found id: ""
	I1017 20:11:23.665036  344862 logs.go:282] 2 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709]
	I1017 20:11:23.665106  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:23.669764  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:23.673631  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:23.673703  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:23.700454  344862 cri.go:89] found id: ""
	I1017 20:11:23.700480  344862 logs.go:282] 0 containers: []
	W1017 20:11:23.700487  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:23.700493  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:23.700538  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:23.730521  344862 cri.go:89] found id: ""
	I1017 20:11:23.730546  344862 logs.go:282] 0 containers: []
	W1017 20:11:23.730554  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:23.730560  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:23.730606  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:23.758499  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:23.758525  344862 cri.go:89] found id: ""
	I1017 20:11:23.758534  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:23.758596  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:23.762703  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:23.762798  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:23.790773  344862 cri.go:89] found id: ""
	I1017 20:11:23.790803  344862 logs.go:282] 0 containers: []
	W1017 20:11:23.790815  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:23.790823  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:23.790889  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:23.824954  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:23.824998  344862 cri.go:89] found id: "a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc"
	I1017 20:11:23.825004  344862 cri.go:89] found id: ""
	I1017 20:11:23.825014  344862 logs.go:282] 2 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2 a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc]
	I1017 20:11:23.825081  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:23.829632  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:23.834349  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:23.834409  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:23.865416  344862 cri.go:89] found id: ""
	I1017 20:11:23.865448  344862 logs.go:282] 0 containers: []
	W1017 20:11:23.865459  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:23.865467  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:23.865531  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:23.899096  344862 cri.go:89] found id: ""
	I1017 20:11:23.899138  344862 logs.go:282] 0 containers: []
	W1017 20:11:23.899150  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:23.899171  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:23.899187  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:11:24.005852  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:24.005898  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:24.073556  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:11:24.073595  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:24.073618  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:24.120002  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:24.120047  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:24.159964  344862 logs.go:123] Gathering logs for kube-controller-manager [a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc] ...
	I1017 20:11:24.160001  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc"
	I1017 20:11:24.198661  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:24.198699  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:24.227522  344862 logs.go:123] Gathering logs for kube-apiserver [924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709] ...
	I1017 20:11:24.227562  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	W1017 20:11:24.262931  344862 logs.go:130] failed kube-apiserver [924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709": Process exited with status 1
	stdout:
	
	stderr:
	E1017 20:11:24.259824    3598 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709\": container with ID starting with 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709 not found: ID does not exist" containerID="924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	time="2025-10-17T20:11:24Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709\": container with ID starting with 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1017 20:11:24.259824    3598 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709\": container with ID starting with 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709 not found: ID does not exist" containerID="924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	time="2025-10-17T20:11:24Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709\": container with ID starting with 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709 not found: ID does not exist"
	
	** /stderr **
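The NotFound failure above reads as a benign race rather than a test bug: `crictl ps -a` reported 924cfa4c... at 20:11:23, but the exited kube-apiserver container was evidently removed before `crictl logs` ran at 20:11:24, and every later listing round in this log finds only the single 9b60c587... apiserver container.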
	I1017 20:11:24.262961  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:24.262979  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:24.337101  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:24.337148  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:24.407972  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:24.408014  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
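Each gathering round above is the same loop: resolve container IDs by name filter, then tail each one's logs. A sketch of that loop (commands copied from the log; the sudo and SSH plumbing is omitted):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors `crictl ps -a --quiet --name=<name>` from the log.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
            ids, err := containerIDs(name)
            if err != nil {
                fmt.Println(name, "listing failed:", err)
                continue
            }
            for _, id := range ids {
                // Mirrors `crictl logs --tail 400 <id>`; a container can vanish
                // between listing and this call, so errors are non-fatal.
                logs, err := exec.Command("crictl", "logs", "--tail", "400", id).CombinedOutput()
                if err != nil {
                    fmt.Println(id, "logs failed:", err)
                    continue
                }
                fmt.Printf("=== %s (%s): %d bytes of logs\n", name, id[:12], len(logs))
            }
        }
    }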
	I1017 20:11:26.956886  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:11:26.957441  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:11:26.957511  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:26.957570  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:26.986878  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:26.986907  344862 cri.go:89] found id: ""
	I1017 20:11:26.986919  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:11:26.986983  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:26.991365  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:26.991439  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:27.020189  344862 cri.go:89] found id: ""
	I1017 20:11:27.020222  344862 logs.go:282] 0 containers: []
	W1017 20:11:27.020235  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:27.020242  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:27.020300  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:27.048308  344862 cri.go:89] found id: ""
	I1017 20:11:27.048340  344862 logs.go:282] 0 containers: []
	W1017 20:11:27.048353  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:27.048361  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:27.048423  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:27.078305  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:27.078325  344862 cri.go:89] found id: ""
	I1017 20:11:27.078333  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:27.078385  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:27.082965  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:27.083035  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:27.115195  344862 cri.go:89] found id: ""
	I1017 20:11:27.115222  344862 logs.go:282] 0 containers: []
	W1017 20:11:27.115230  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:27.115237  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:27.115304  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:27.145191  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:27.145217  344862 cri.go:89] found id: ""
	I1017 20:11:27.145228  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:11:27.145292  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:27.150380  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:27.150451  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:27.183023  344862 cri.go:89] found id: ""
	I1017 20:11:27.183058  344862 logs.go:282] 0 containers: []
	W1017 20:11:27.183069  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:27.183078  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:27.183142  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:27.215624  344862 cri.go:89] found id: ""
	I1017 20:11:27.215656  344862 logs.go:282] 0 containers: []
	W1017 20:11:27.215667  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:27.215678  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:27.215693  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:27.285546  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:11:27.285571  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:27.285591  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:27.327286  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:27.327327  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:27.395895  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:27.395939  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:27.431183  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:27.431212  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:27.482825  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:27.482870  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:27.519037  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:27.519071  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1017 20:11:25.371208  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	W1017 20:11:27.372251  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	W1017 20:11:25.145459  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	W1017 20:11:27.146492  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	W1017 20:11:29.372307  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	I1017 20:11:30.870784  365613 pod_ready.go:94] pod "coredns-5dd5756b68-xrnvz" is "Ready"
	I1017 20:11:30.870813  365613 pod_ready.go:86] duration metric: took 31.005781208s for pod "coredns-5dd5756b68-xrnvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:30.873886  365613 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:30.878830  365613 pod_ready.go:94] pod "etcd-old-k8s-version-726816" is "Ready"
	I1017 20:11:30.878859  365613 pod_ready.go:86] duration metric: took 4.941209ms for pod "etcd-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:30.881855  365613 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:30.886549  365613 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-726816" is "Ready"
	I1017 20:11:30.886575  365613 pod_ready.go:86] duration metric: took 4.69699ms for pod "kube-apiserver-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:30.889440  365613 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:31.069368  365613 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-726816" is "Ready"
	I1017 20:11:31.069394  365613 pod_ready.go:86] duration metric: took 179.926258ms for pod "kube-controller-manager-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:31.269156  365613 pod_ready.go:83] waiting for pod "kube-proxy-xp229" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:31.668596  365613 pod_ready.go:94] pod "kube-proxy-xp229" is "Ready"
	I1017 20:11:31.668627  365613 pod_ready.go:86] duration metric: took 399.446544ms for pod "kube-proxy-xp229" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:31.869279  365613 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:32.269164  365613 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-726816" is "Ready"
	I1017 20:11:32.269191  365613 pod_ready.go:86] duration metric: took 399.890288ms for pod "kube-scheduler-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:32.269203  365613 pod_ready.go:40] duration metric: took 32.408503539s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:11:32.315370  365613 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1017 20:11:32.319628  365613 out.go:203] 
	W1017 20:11:32.321206  365613 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1017 20:11:32.322545  365613 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1017 20:11:32.324306  365613 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-726816" cluster and "default" namespace by default
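The skew warning a few lines up is worth noting: upstream kubectl only guarantees compatibility within one minor version of the apiserver, so kubectl 1.34.1 against a 1.28.0 control plane (six minors apart) is well outside the supported window, which is why minikube points at its bundled `minikube kubectl --` wrapper.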
	I1017 20:11:27.625085  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:27.625121  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:30.150840  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:11:30.152251  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:11:30.152326  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:30.152389  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:30.190765  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:30.190796  344862 cri.go:89] found id: ""
	I1017 20:11:30.190807  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:11:30.190871  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:30.198196  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:30.198278  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:30.235078  344862 cri.go:89] found id: ""
	I1017 20:11:30.235110  344862 logs.go:282] 0 containers: []
	W1017 20:11:30.235122  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:30.235130  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:30.235198  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:30.266189  344862 cri.go:89] found id: ""
	I1017 20:11:30.266222  344862 logs.go:282] 0 containers: []
	W1017 20:11:30.266236  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:30.266245  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:30.266296  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:30.302098  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:30.302130  344862 cri.go:89] found id: ""
	I1017 20:11:30.302144  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:30.302212  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:30.306947  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:30.307024  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:30.347793  344862 cri.go:89] found id: ""
	I1017 20:11:30.347822  344862 logs.go:282] 0 containers: []
	W1017 20:11:30.347831  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:30.347837  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:30.347884  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:30.378723  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:30.378760  344862 cri.go:89] found id: ""
	I1017 20:11:30.378771  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:11:30.378834  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:30.383616  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:30.383689  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:30.416596  344862 cri.go:89] found id: ""
	I1017 20:11:30.416628  344862 logs.go:282] 0 containers: []
	W1017 20:11:30.416638  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:30.416645  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:30.416695  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:30.449864  344862 cri.go:89] found id: ""
	I1017 20:11:30.449902  344862 logs.go:282] 0 containers: []
	W1017 20:11:30.449915  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:30.449928  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:30.449970  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:30.484041  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:30.484091  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:11:30.580399  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:30.580440  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:30.601901  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:30.601943  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:30.673586  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:11:30.673615  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:30.673638  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:30.711383  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:30.711427  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:30.771567  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:30.771605  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:30.802475  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:30.802507  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1017 20:11:29.646264  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	W1017 20:11:32.145470  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	I1017 20:11:33.350633  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:11:33.351164  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:11:33.351221  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:33.351291  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:33.380099  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:33.380127  344862 cri.go:89] found id: ""
	I1017 20:11:33.380136  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:11:33.380194  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:33.384561  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:33.384621  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:33.413184  344862 cri.go:89] found id: ""
	I1017 20:11:33.413216  344862 logs.go:282] 0 containers: []
	W1017 20:11:33.413225  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:33.413231  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:33.413279  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:33.442873  344862 cri.go:89] found id: ""
	I1017 20:11:33.442902  344862 logs.go:282] 0 containers: []
	W1017 20:11:33.442910  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:33.442917  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:33.442970  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:33.471895  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:33.471919  344862 cri.go:89] found id: ""
	I1017 20:11:33.471929  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:33.471988  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:33.476614  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:33.476689  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:33.505550  344862 cri.go:89] found id: ""
	I1017 20:11:33.505580  344862 logs.go:282] 0 containers: []
	W1017 20:11:33.505591  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:33.505600  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:33.505668  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:33.534791  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:33.534817  344862 cri.go:89] found id: ""
	I1017 20:11:33.534832  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:11:33.534892  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:33.539320  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:33.539401  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:33.568534  344862 cri.go:89] found id: ""
	I1017 20:11:33.568559  344862 logs.go:282] 0 containers: []
	W1017 20:11:33.568577  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:33.568586  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:33.568640  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:33.595965  344862 cri.go:89] found id: ""
	I1017 20:11:33.595998  344862 logs.go:282] 0 containers: []
	W1017 20:11:33.596015  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:33.596027  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:33.596043  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:33.629984  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:33.630025  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:11:33.722344  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:33.722387  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:33.743220  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:33.743266  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:33.804264  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
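	The refused connection to localhost:8443 means the kubeconfig handed to kubectl points at an apiserver that is not listening. A minimal manual check on the node, reusing the binary and kubeconfig paths from the failed command above (nothing assumed beyond what this log already shows):
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig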
	I1017 20:11:33.804305  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:33.804319  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:33.836756  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:33.836796  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:33.890315  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:33.890368  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:33.918544  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:33.918572  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
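	The discovery loop above can be reproduced by hand with the same two crictl invocations the log records; a minimal sketch (the container ID is a placeholder):
	  # list all containers, any state, whose name matches a component; print IDs only
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # then tail the last 400 lines of a matching container's logs
	  sudo crictl logs --tail 400 <container-id>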
	I1017 20:11:36.467882  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:11:36.468400  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:11:36.468463  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:36.468517  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:36.497657  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:36.497679  344862 cri.go:89] found id: ""
	I1017 20:11:36.497689  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:11:36.497765  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:36.501932  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:36.502005  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:36.532047  344862 cri.go:89] found id: ""
	I1017 20:11:36.532092  344862 logs.go:282] 0 containers: []
	W1017 20:11:36.532103  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:36.532111  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:36.532172  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:36.563659  344862 cri.go:89] found id: ""
	I1017 20:11:36.563686  344862 logs.go:282] 0 containers: []
	W1017 20:11:36.563694  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:36.563701  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:36.563781  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:36.595006  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:36.595053  344862 cri.go:89] found id: ""
	I1017 20:11:36.595062  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:36.595109  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:36.599182  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:36.599263  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:36.626774  344862 cri.go:89] found id: ""
	I1017 20:11:36.626805  344862 logs.go:282] 0 containers: []
	W1017 20:11:36.626815  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:36.626824  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:36.626887  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:36.657682  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:36.657708  344862 cri.go:89] found id: ""
	I1017 20:11:36.657717  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:11:36.657788  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:36.662266  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:36.662349  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:36.691141  344862 cri.go:89] found id: ""
	I1017 20:11:36.691172  344862 logs.go:282] 0 containers: []
	W1017 20:11:36.691182  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:36.691190  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:36.691250  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:36.719684  344862 cri.go:89] found id: ""
	I1017 20:11:36.719709  344862 logs.go:282] 0 containers: []
	W1017 20:11:36.719717  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:36.719725  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:36.719770  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:36.773604  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:36.773642  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:36.801603  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:36.801632  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:36.850685  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:36.850725  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:36.882915  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:36.882946  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:11:36.974120  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:36.974159  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:36.993593  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:36.993641  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:37.053479  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:11:37.053502  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:37.053515  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	W1017 20:11:34.645858  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	W1017 20:11:37.147584  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	I1017 20:11:39.587830  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:11:39.588401  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
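	The healthz probe can be repeated independently of minikube; a sketch with curl (-k because the apiserver presents a cluster-internal certificate):
	  curl -k https://192.168.76.2:8443/healthz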
	I1017 20:11:39.588463  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:39.588525  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:39.619000  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:39.619023  344862 cri.go:89] found id: ""
	I1017 20:11:39.619031  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:11:39.619079  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:39.623155  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:39.623241  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:39.651366  344862 cri.go:89] found id: ""
	I1017 20:11:39.651397  344862 logs.go:282] 0 containers: []
	W1017 20:11:39.651409  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:39.651416  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:39.651477  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:39.681335  344862 cri.go:89] found id: ""
	I1017 20:11:39.681358  344862 logs.go:282] 0 containers: []
	W1017 20:11:39.681365  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:39.681373  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:39.681420  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:39.710507  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:39.710534  344862 cri.go:89] found id: ""
	I1017 20:11:39.710544  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:39.710605  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:39.714719  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:39.714811  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:39.741277  344862 cri.go:89] found id: ""
	I1017 20:11:39.741301  344862 logs.go:282] 0 containers: []
	W1017 20:11:39.741313  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:39.741319  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:39.741366  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:39.769983  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:39.770006  344862 cri.go:89] found id: ""
	I1017 20:11:39.770017  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:11:39.770085  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:39.774236  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:39.774314  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:39.802649  344862 cri.go:89] found id: ""
	I1017 20:11:39.802681  344862 logs.go:282] 0 containers: []
	W1017 20:11:39.802693  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:39.802701  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:39.802786  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:39.831772  344862 cri.go:89] found id: ""
	I1017 20:11:39.831804  344862 logs.go:282] 0 containers: []
	W1017 20:11:39.831811  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:39.831822  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:39.831840  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:39.851041  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:39.851078  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:39.909691  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:11:39.909714  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:39.909749  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:39.942836  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:39.942871  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:39.995252  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:39.995291  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:40.025473  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:40.025502  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:40.073327  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:40.073367  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:40.104877  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:40.104913  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1017 20:11:39.645160  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	W1017 20:11:42.145515  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
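	These pod_ready warnings come from polling the pod's Ready condition. The same condition can be read directly, assuming kubectl access to the profile the warnings belong to (pod name taken from the lines above):
	  kubectl -n kube-system get pod coredns-66bc5c9577-p4n86 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'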
	
	
	==> CRI-O <==
	Oct 17 20:11:19 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:19.144898953Z" level=info msg="Started container" PID=1716 containerID=f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2/dashboard-metrics-scraper id=06fcca7e-1545-457c-a335-f55965d3152f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d2e176dea86e8b753724b307736b974d469fa9beed85f3e327265309a02e865
	Oct 17 20:11:20 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:20.095870522Z" level=info msg="Removing container: cc9fbaf140d10032ed4b9f836b67c1f2765d3394be508f4c1d2197d68dfc8cbd" id=df84eb8c-c326-4a14-93b9-4ee5cc71d3f6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:11:20 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:20.108079882Z" level=info msg="Removed container cc9fbaf140d10032ed4b9f836b67c1f2765d3394be508f4c1d2197d68dfc8cbd: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2/dashboard-metrics-scraper" id=df84eb8c-c326-4a14-93b9-4ee5cc71d3f6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.123443257Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=001d432b-053e-48da-a708-405123846a98 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.135173098Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2d9fd98d-1135-4871-b18f-392b68b8ebc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.144301061Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6eae7233-d195-4087-a5ab-74f077124190 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.144669143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.191072172Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.191307476Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/245b9d436bc47f56fade302a7de9c473b44508fd36913af398a58626907dac2b/merged/etc/passwd: no such file or directory"
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.191344579Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/245b9d436bc47f56fade302a7de9c473b44508fd36913af398a58626907dac2b/merged/etc/group: no such file or directory"
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.191664937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.28708139Z" level=info msg="Created container 747137e5be4af0d94b6f109788cf1c1b9bafca36a0e7247a8a3f79cd60d8826b: kube-system/storage-provisioner/storage-provisioner" id=6eae7233-d195-4087-a5ab-74f077124190 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.287873223Z" level=info msg="Starting container: 747137e5be4af0d94b6f109788cf1c1b9bafca36a0e7247a8a3f79cd60d8826b" id=5b1984ef-58b4-48f1-9180-81e6517a2715 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.290049791Z" level=info msg="Started container" PID=1733 containerID=747137e5be4af0d94b6f109788cf1c1b9bafca36a0e7247a8a3f79cd60d8826b description=kube-system/storage-provisioner/storage-provisioner id=5b1984ef-58b4-48f1-9180-81e6517a2715 name=/runtime.v1.RuntimeService/StartContainer sandboxID=108af169199ad25456fa1076d65d0f31a742c90544a6b69c05024fd1f8684f93
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.01048975Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=93d2a4ec-8167-47e1-8ed9-9cefd7d95ed3 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.011511116Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ea123555-b96c-4919-b944-3838b4dcbefe name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.012505429Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2/dashboard-metrics-scraper" id=9c2949bd-6108-42f3-8196-cacf7670427e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.012776313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.019378598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.019992672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.050913048Z" level=info msg="Created container 3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2/dashboard-metrics-scraper" id=9c2949bd-6108-42f3-8196-cacf7670427e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.051648959Z" level=info msg="Starting container: 3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666" id=1f48cd45-d7ce-4f3a-8326-5903617da10e name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.053642051Z" level=info msg="Started container" PID=1768 containerID=3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2/dashboard-metrics-scraper id=1f48cd45-d7ce-4f3a-8326-5903617da10e name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d2e176dea86e8b753724b307736b974d469fa9beed85f3e327265309a02e865
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.138997804Z" level=info msg="Removing container: f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea" id=093a694e-c337-4dec-8ee7-cc2c34c2affd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.150265618Z" level=info msg="Removed container f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2/dashboard-metrics-scraper" id=093a694e-c337-4dec-8ee7-cc2c34c2affd name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	3a364cb5d70c9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   3d2e176dea86e       dashboard-metrics-scraper-5f989dc9cf-lfwq2       kubernetes-dashboard
	747137e5be4af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           16 seconds ago      Running             storage-provisioner         1                   108af169199ad       storage-provisioner                              kube-system
	6fc9076dca48e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   31 seconds ago      Running             kubernetes-dashboard        0                   7cb20e4b89354       kubernetes-dashboard-8694d4445c-dkhv5            kubernetes-dashboard
	ebb776b4595c3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           47 seconds ago      Running             coredns                     0                   364f35a14eab9       coredns-5dd5756b68-xrnvz                         kube-system
	6f92088ac4c2c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   29a2a0d6d57c5       busybox                                          default
	91b37cb25594b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   5cea72cf33eeb       kindnet-9slhm                                    kube-system
	d366f49e228b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   108af169199ad       storage-provisioner                              kube-system
	c68be51b1893d       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           47 seconds ago      Running             kube-proxy                  0                   6914ada52840c       kube-proxy-xp229                                 kube-system
	968b01f15b033       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           51 seconds ago      Running             kube-controller-manager     0                   80b58ce7f52bc       kube-controller-manager-old-k8s-version-726816   kube-system
	7881cbacb992a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           51 seconds ago      Running             etcd                        0                   eecf5e4d363ff       etcd-old-k8s-version-726816                      kube-system
	8d9c2dfa70a1e       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           51 seconds ago      Running             kube-scheduler              0                   100ae214e1dea       kube-scheduler-old-k8s-version-726816            kube-system
	1bc61bd7d0ccf       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           51 seconds ago      Running             kube-apiserver              0                   8a53f2948e6ae       kube-apiserver-old-k8s-version-726816            kube-system
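	Two rows above are Exited: the dashboard-metrics-scraper restart loop (attempt 2) and the storage-provisioner's first attempt, since replaced by the Running attempt 1. They can be isolated with crictl's state filter; a sketch assuming a crictl build that supports --state:
	  sudo crictl ps -a --state exited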
	
	
	==> coredns [ebb776b4595c362bf346440793ab3e48e5a12e2379b9bcedfa1606c7e7878296] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38407 - 9919 "HINFO IN 7967464223475309590.1785099526618209064. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.111841044s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
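	The repeated plugin/ready lines show CoreDNS's ready plugin holding back readiness until the kubernetes plugin finishes syncing against the API. The probe can be hit directly; a sketch assuming the default ready port 8181 and a pod IP (placeholder) reachable from the node:
	  curl http://<coredns-pod-ip>:8181/ready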
	
	
	==> describe nodes <==
	Name:               old-k8s-version-726816
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-726816
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=old-k8s-version-726816
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_09_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:09:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-726816
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:11:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:11:28 +0000   Fri, 17 Oct 2025 20:09:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:11:28 +0000   Fri, 17 Oct 2025 20:09:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:11:28 +0000   Fri, 17 Oct 2025 20:09:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:11:28 +0000   Fri, 17 Oct 2025 20:10:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-726816
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                239cdd26-1e67-40fc-a3aa-17a6bcadd5b2
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-5dd5756b68-xrnvz                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-old-k8s-version-726816                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-9slhm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-old-k8s-version-726816             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-old-k8s-version-726816    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-xp229                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-old-k8s-version-726816             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-lfwq2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-dkhv5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x9 over 2m)    kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node old-k8s-version-726816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x7 over 2m)    kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node old-k8s-version-726816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     115s               kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s               node-controller  Node old-k8s-version-726816 event: Registered Node old-k8s-version-726816 in Controller
	  Normal  NodeReady                88s                kubelet          Node old-k8s-version-726816 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)  kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)  kubelet          Node old-k8s-version-726816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 51s)  kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node old-k8s-version-726816 event: Registered Node old-k8s-version-726816 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [7881cbacb992a19527a25f5d1cce67db8caefd2e7da59b056d1c86a577aedfc1] <==
	{"level":"info","ts":"2025-10-17T20:10:55.575226Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:10:55.575237Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:10:55.575309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-10-17T20:10:55.575391Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-10-17T20:10:55.575551Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:10:55.575586Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:10:55.577708Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-17T20:10:55.57817Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T20:10:55.578934Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-17T20:10:55.578979Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-17T20:10:55.578096Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-17T20:10:57.264769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-17T20:10:57.26483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-17T20:10:57.264878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-10-17T20:10:57.264902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-10-17T20:10:57.26491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-10-17T20:10:57.264924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-10-17T20:10:57.264937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-10-17T20:10:57.266195Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-726816 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T20:10:57.266219Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:10:57.266204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:10:57.266505Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T20:10:57.266535Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-17T20:10:57.267461Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T20:10:57.267625Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 20:11:46 up  1:54,  0 user,  load average: 3.35, 3.49, 2.33
	Linux old-k8s-version-726816 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [91b37cb25594bd4a4037da457468ce8ab04be8d76be1ea150b98cac55be126b1] <==
	I1017 20:10:59.657405       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:10:59.657714       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1017 20:10:59.657917       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:10:59.657934       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:10:59.657960       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:10:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:10:59.859048       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:10:59.859164       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:10:59.859183       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:10:59.859572       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:11:00.257131       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:11:00.257178       1 metrics.go:72] Registering metrics
	I1017 20:11:00.335341       1 controller.go:711] "Syncing nftables rules"
	I1017 20:11:09.858914       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:11:09.858994       1 main.go:301] handling current node
	I1017 20:11:19.859429       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:11:19.859467       1 main.go:301] handling current node
	I1017 20:11:29.859303       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:11:29.859352       1 main.go:301] handling current node
	I1017 20:11:39.862622       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:11:39.862656       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1bc61bd7d0ccf139a7202056a01d7760285248ec1015158005831ced4f43e0e7] <==
	I1017 20:10:58.307881       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:10:58.323586       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1017 20:10:58.365851       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1017 20:10:58.365879       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1017 20:10:58.365894       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1017 20:10:58.365858       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1017 20:10:58.366009       1 aggregator.go:166] initial CRD sync complete...
	I1017 20:10:58.366025       1 autoregister_controller.go:141] Starting autoregister controller
	I1017 20:10:58.366032       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:10:58.366041       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:10:58.366082       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1017 20:10:58.366092       1 shared_informer.go:318] Caches are synced for configmaps
	I1017 20:10:58.366139       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1017 20:10:58.376183       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:10:59.159821       1 controller.go:624] quota admission added evaluator for: namespaces
	I1017 20:10:59.197465       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1017 20:10:59.219359       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:10:59.229322       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:10:59.237599       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1017 20:10:59.269196       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:10:59.279691       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.14.236"}
	I1017 20:10:59.311391       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.88.149"}
	I1017 20:11:11.152776       1 controller.go:624] quota admission added evaluator for: endpoints
	I1017 20:11:11.252013       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:11:11.353276       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [968b01f15b0332d8945cfd8c8d6e9d02cb2f9635511ccc519c0bcf9750467356] <==
	I1017 20:11:11.052949       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 20:11:11.356350       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1017 20:11:11.357648       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1017 20:11:11.366229       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-lfwq2"
	I1017 20:11:11.366257       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-dkhv5"
	I1017 20:11:11.366735       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 20:11:11.374525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.121774ms"
	I1017 20:11:11.374811       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="18.994172ms"
	I1017 20:11:11.381927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.336727ms"
	I1017 20:11:11.382242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.508µs"
	I1017 20:11:11.381998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.139566ms"
	I1017 20:11:11.382324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="36.924µs"
	I1017 20:11:11.390857       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.913µs"
	I1017 20:11:11.399353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.176µs"
	I1017 20:11:11.414014       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 20:11:11.414052       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1017 20:11:16.190208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.676575ms"
	I1017 20:11:16.190377       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="113.786µs"
	I1017 20:11:19.104920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.37µs"
	I1017 20:11:20.108767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.903µs"
	I1017 20:11:21.110652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="100.967µs"
	I1017 20:11:30.660127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.222555ms"
	I1017 20:11:30.660374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.592µs"
	I1017 20:11:35.150330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.923µs"
	I1017 20:11:41.688989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.611µs"
	
	
	==> kube-proxy [c68be51b1893d00600739733307a7ad07027891e96caa6eb528ee3a047f5c923] <==
	I1017 20:10:59.428447       1 server_others.go:69] "Using iptables proxy"
	I1017 20:10:59.438309       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1017 20:10:59.458846       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:10:59.461880       1 server_others.go:152] "Using iptables Proxier"
	I1017 20:10:59.461922       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1017 20:10:59.461933       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1017 20:10:59.461974       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1017 20:10:59.462282       1 server.go:846] "Version info" version="v1.28.0"
	I1017 20:10:59.462301       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:10:59.463037       1 config.go:97] "Starting endpoint slice config controller"
	I1017 20:10:59.463095       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1017 20:10:59.463138       1 config.go:188] "Starting service config controller"
	I1017 20:10:59.463148       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1017 20:10:59.463250       1 config.go:315] "Starting node config controller"
	I1017 20:10:59.463274       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1017 20:10:59.564281       1 shared_informer.go:318] Caches are synced for service config
	I1017 20:10:59.564304       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1017 20:10:59.564358       1 shared_informer.go:318] Caches are synced for node config
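	kube-proxy reports the iptables proxier, so service routing is rendered into the nat table. The generated chains can be inspected on the node; a sketch assuming iptables is on PATH:
	  sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20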
	
	
	==> kube-scheduler [8d9c2dfa70a1ee7b1f6e3a8806e27f4e0cc7037f6cac3b6bdd2e92b821979c8e] <==
	I1017 20:10:55.958439       1 serving.go:348] Generated self-signed cert in-memory
	W1017 20:10:58.292250       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 20:10:58.292285       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:10:58.292304       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 20:10:58.292313       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 20:10:58.323431       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1017 20:10:58.323578       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:10:58.325603       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:10:58.325821       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1017 20:10:58.326450       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1017 20:10:58.326533       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1017 20:10:58.426192       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 20:11:11 old-k8s-version-726816 kubelet[720]: I1017 20:11:11.376450     720 topology_manager.go:215] "Topology Admit Handler" podUID="8d572a9b-dd03-4904-83d4-3dfb0680522e" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-dkhv5"
	Oct 17 20:11:11 old-k8s-version-726816 kubelet[720]: I1017 20:11:11.491007     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8d572a9b-dd03-4904-83d4-3dfb0680522e-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-dkhv5\" (UID: \"8d572a9b-dd03-4904-83d4-3dfb0680522e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dkhv5"
	Oct 17 20:11:11 old-k8s-version-726816 kubelet[720]: I1017 20:11:11.491071     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qckdf\" (UniqueName: \"kubernetes.io/projected/67f5e328-8b65-4fa4-a45e-40382fe9fed8-kube-api-access-qckdf\") pod \"dashboard-metrics-scraper-5f989dc9cf-lfwq2\" (UID: \"67f5e328-8b65-4fa4-a45e-40382fe9fed8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2"
	Oct 17 20:11:11 old-k8s-version-726816 kubelet[720]: I1017 20:11:11.491214     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42d55\" (UniqueName: \"kubernetes.io/projected/8d572a9b-dd03-4904-83d4-3dfb0680522e-kube-api-access-42d55\") pod \"kubernetes-dashboard-8694d4445c-dkhv5\" (UID: \"8d572a9b-dd03-4904-83d4-3dfb0680522e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dkhv5"
	Oct 17 20:11:11 old-k8s-version-726816 kubelet[720]: I1017 20:11:11.491258     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/67f5e328-8b65-4fa4-a45e-40382fe9fed8-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-lfwq2\" (UID: \"67f5e328-8b65-4fa4-a45e-40382fe9fed8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2"
	Oct 17 20:11:16 old-k8s-version-726816 kubelet[720]: I1017 20:11:16.123129     720 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dkhv5" podStartSLOduration=1.121916445 podCreationTimestamp="2025-10-17 20:11:11 +0000 UTC" firstStartedPulling="2025-10-17 20:11:11.702326452 +0000 UTC m=+16.789863038" lastFinishedPulling="2025-10-17 20:11:15.703474244 +0000 UTC m=+20.791010836" observedRunningTime="2025-10-17 20:11:16.122663101 +0000 UTC m=+21.210199694" watchObservedRunningTime="2025-10-17 20:11:16.123064243 +0000 UTC m=+21.210600839"
	Oct 17 20:11:19 old-k8s-version-726816 kubelet[720]: I1017 20:11:19.090314     720 scope.go:117] "RemoveContainer" containerID="cc9fbaf140d10032ed4b9f836b67c1f2765d3394be508f4c1d2197d68dfc8cbd"
	Oct 17 20:11:20 old-k8s-version-726816 kubelet[720]: I1017 20:11:20.094453     720 scope.go:117] "RemoveContainer" containerID="cc9fbaf140d10032ed4b9f836b67c1f2765d3394be508f4c1d2197d68dfc8cbd"
	Oct 17 20:11:20 old-k8s-version-726816 kubelet[720]: I1017 20:11:20.094640     720 scope.go:117] "RemoveContainer" containerID="f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea"
	Oct 17 20:11:20 old-k8s-version-726816 kubelet[720]: E1017 20:11:20.095010     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lfwq2_kubernetes-dashboard(67f5e328-8b65-4fa4-a45e-40382fe9fed8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2" podUID="67f5e328-8b65-4fa4-a45e-40382fe9fed8"
	Oct 17 20:11:21 old-k8s-version-726816 kubelet[720]: I1017 20:11:21.098731     720 scope.go:117] "RemoveContainer" containerID="f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea"
	Oct 17 20:11:21 old-k8s-version-726816 kubelet[720]: E1017 20:11:21.099032     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lfwq2_kubernetes-dashboard(67f5e328-8b65-4fa4-a45e-40382fe9fed8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2" podUID="67f5e328-8b65-4fa4-a45e-40382fe9fed8"
	Oct 17 20:11:22 old-k8s-version-726816 kubelet[720]: I1017 20:11:22.100967     720 scope.go:117] "RemoveContainer" containerID="f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea"
	Oct 17 20:11:22 old-k8s-version-726816 kubelet[720]: E1017 20:11:22.101243     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lfwq2_kubernetes-dashboard(67f5e328-8b65-4fa4-a45e-40382fe9fed8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2" podUID="67f5e328-8b65-4fa4-a45e-40382fe9fed8"
	Oct 17 20:11:30 old-k8s-version-726816 kubelet[720]: I1017 20:11:30.122879     720 scope.go:117] "RemoveContainer" containerID="d366f49e228b9559f5390fd4d4d8cbe630c4d711c715aac5c52834352215ef1c"
	Oct 17 20:11:35 old-k8s-version-726816 kubelet[720]: I1017 20:11:35.009685     720 scope.go:117] "RemoveContainer" containerID="f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea"
	Oct 17 20:11:35 old-k8s-version-726816 kubelet[720]: I1017 20:11:35.137601     720 scope.go:117] "RemoveContainer" containerID="f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea"
	Oct 17 20:11:35 old-k8s-version-726816 kubelet[720]: I1017 20:11:35.137854     720 scope.go:117] "RemoveContainer" containerID="3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666"
	Oct 17 20:11:35 old-k8s-version-726816 kubelet[720]: E1017 20:11:35.138237     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lfwq2_kubernetes-dashboard(67f5e328-8b65-4fa4-a45e-40382fe9fed8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2" podUID="67f5e328-8b65-4fa4-a45e-40382fe9fed8"
	Oct 17 20:11:41 old-k8s-version-726816 kubelet[720]: I1017 20:11:41.678204     720 scope.go:117] "RemoveContainer" containerID="3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666"
	Oct 17 20:11:41 old-k8s-version-726816 kubelet[720]: E1017 20:11:41.678471     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lfwq2_kubernetes-dashboard(67f5e328-8b65-4fa4-a45e-40382fe9fed8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2" podUID="67f5e328-8b65-4fa4-a45e-40382fe9fed8"
	Oct 17 20:11:44 old-k8s-version-726816 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:11:44 old-k8s-version-726816 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:11:44 old-k8s-version-726816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 20:11:44 old-k8s-version-726816 systemd[1]: kubelet.service: Consumed 1.551s CPU time.
	
	
	==> kubernetes-dashboard [6fc9076dca48eb3cdde728afd925cc98ddad1f05f397cf21464426ab3aba4eb1] <==
	2025/10/17 20:11:15 Starting overwatch
	2025/10/17 20:11:15 Using namespace: kubernetes-dashboard
	2025/10/17 20:11:15 Using in-cluster config to connect to apiserver
	2025/10/17 20:11:15 Using secret token for csrf signing
	2025/10/17 20:11:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:11:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:11:15 Successful initial request to the apiserver, version: v1.28.0
	2025/10/17 20:11:15 Generating JWE encryption key
	2025/10/17 20:11:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:11:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:11:15 Initializing JWE encryption key from synchronized object
	2025/10/17 20:11:16 Creating in-cluster Sidecar client
	2025/10/17 20:11:16 Serving insecurely on HTTP port: 9090
	2025/10/17 20:11:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:11:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [747137e5be4af0d94b6f109788cf1c1b9bafca36a0e7247a8a3f79cd60d8826b] <==
	I1017 20:11:30.303959       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:11:30.315933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:11:30.315985       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [d366f49e228b9559f5390fd4d4d8cbe630c4d711c715aac5c52834352215ef1c] <==
	I1017 20:10:59.394166       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:11:29.396774       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-726816 -n old-k8s-version-726816
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-726816 -n old-k8s-version-726816: exit status 2 (326.525141ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-726816 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-726816
helpers_test.go:243: (dbg) docker inspect old-k8s-version-726816:

-- stdout --
	[
	    {
	        "Id": "5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d",
	        "Created": "2025-10-17T20:09:36.13713151Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 365834,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:10:48.649795348Z",
	            "FinishedAt": "2025-10-17T20:10:47.795755739Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d/hostname",
	        "HostsPath": "/var/lib/docker/containers/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d/hosts",
	        "LogPath": "/var/lib/docker/containers/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d/5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d-json.log",
	        "Name": "/old-k8s-version-726816",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-726816:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-726816",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5fe53cd658e3085585870313ef9e7ab04e8ed6f6a28ee153e2b6a626e81d544d",
	                "LowerDir": "/var/lib/docker/overlay2/5dcb54ae27fdd82c6888e48a7ef95596d62c8f5db714aa4e6a3ed9f11e961e43-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5dcb54ae27fdd82c6888e48a7ef95596d62c8f5db714aa4e6a3ed9f11e961e43/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5dcb54ae27fdd82c6888e48a7ef95596d62c8f5db714aa4e6a3ed9f11e961e43/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5dcb54ae27fdd82c6888e48a7ef95596d62c8f5db714aa4e6a3ed9f11e961e43/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-726816",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-726816/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-726816",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-726816",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-726816",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1eb1619b61aaef7b358e4c292e0071d83beec24bdd94d99b443d0be673341be2",
	            "SandboxKey": "/var/run/docker/netns/1eb1619b61aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-726816": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:73:3c:95:0c:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a2f3c9774d269d6de3a98b72179a7362d7a29c679daa09f837b76252bd896b76",
	                    "EndpointID": "dffde74fb416ead1b5e599083f84261984bf81bf394be56d27d2bde8956567e3",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-726816",
	                        "5fe53cd658e3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-726816 -n old-k8s-version-726816
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-726816 -n old-k8s-version-726816: exit status 2 (327.162377ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-726816 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-726816 logs -n 25: (1.176836463s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-684669                                                                                                                                                                                                                              │ cilium-684669             │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p running-upgrade-097245 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                                          │ running-upgrade-097245    │ jenkins │ v1.32.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p force-systemd-env-834947                                                                                                                                                                                                                   │ force-systemd-env-834947  │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p cert-expiration-202048 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-202048    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p missing-upgrade-159057 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-159057    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ stop    │ -p kubernetes-upgrade-660693                                                                                                                                                                                                                  │ kubernetes-upgrade-660693 │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-660693 │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ start   │ -p running-upgrade-097245 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-097245    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p missing-upgrade-159057                                                                                                                                                                                                                     │ missing-upgrade-159057    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p force-systemd-flag-599050 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p running-upgrade-097245                                                                                                                                                                                                                     │ running-upgrade-097245    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ ssh     │ force-systemd-flag-599050 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p force-systemd-flag-599050                                                                                                                                                                                                                  │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-726816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p old-k8s-version-726816 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-726816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-449580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p no-preload-449580 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:11 UTC │
	│ addons  │ enable dashboard -p no-preload-449580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	│ image   │ old-k8s-version-726816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ pause   │ -p old-k8s-version-726816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:11:09
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:11:09.309966  369697 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:11:09.310208  369697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:11:09.310217  369697 out.go:374] Setting ErrFile to fd 2...
	I1017 20:11:09.310220  369697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:11:09.310449  369697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:11:09.310972  369697 out.go:368] Setting JSON to false
	I1017 20:11:09.312229  369697 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6817,"bootTime":1760725052,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:11:09.312334  369697 start.go:141] virtualization: kvm guest
	I1017 20:11:09.314475  369697 out.go:179] * [no-preload-449580] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:11:09.315904  369697 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:11:09.315897  369697 notify.go:220] Checking for updates...
	I1017 20:11:09.317584  369697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:11:09.319369  369697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:11:09.320988  369697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:11:09.322424  369697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:11:09.324061  369697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:11:09.325990  369697 config.go:182] Loaded profile config "no-preload-449580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:11:09.326672  369697 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:11:09.352325  369697 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:11:09.352431  369697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:11:09.414899  369697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:11:09.403433283 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:11:09.415005  369697 docker.go:318] overlay module found
	I1017 20:11:09.418197  369697 out.go:179] * Using the docker driver based on existing profile
	I1017 20:11:09.419596  369697 start.go:305] selected driver: docker
	I1017 20:11:09.419622  369697 start.go:925] validating driver "docker" against &{Name:no-preload-449580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:11:09.419763  369697 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:11:09.420416  369697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:11:09.478589  369697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:11:09.467148832 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:11:09.478931  369697 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:11:09.478962  369697 cni.go:84] Creating CNI manager for ""
	I1017 20:11:09.479055  369697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:11:09.479098  369697 start.go:349] cluster config:
	{Name:no-preload-449580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:11:09.481268  369697 out.go:179] * Starting "no-preload-449580" primary control-plane node in "no-preload-449580" cluster
	I1017 20:11:09.482697  369697 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:11:09.484146  369697 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:11:09.485444  369697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:11:09.485559  369697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:11:09.485580  369697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/config.json ...
	I1017 20:11:09.485881  369697 cache.go:107] acquiring lock: {Name:mkd0df842d4d8da119c6855ae5b215973a1bd054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.485942  369697 cache.go:107] acquiring lock: {Name:mkb1ea73854f03abddddc66ea6d8ff48980053b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.485935  369697 cache.go:107] acquiring lock: {Name:mk495930b32aab4137b78173fcb5d9cf58d8239c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.485991  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1017 20:11:09.485962  369697 cache.go:107] acquiring lock: {Name:mk79978b0094a0a4fe274208f9bd0f469915fa13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.486022  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1017 20:11:09.486034  369697 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 100µs
	I1017 20:11:09.486036  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1017 20:11:09.486049  369697 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 112.078µs
	I1017 20:11:09.486054  369697 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1017 20:11:09.486058  369697 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1017 20:11:09.486005  369697 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.694µs
	I1017 20:11:09.486077  369697 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1017 20:11:09.485881  369697 cache.go:107] acquiring lock: {Name:mk95a64393bf585bd3acb10c28b2e4486b82554a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.486064  369697 cache.go:107] acquiring lock: {Name:mk1e16df1578e3f66034d7e28be03b6ac01b470a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.486093  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1017 20:11:09.486101  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1017 20:11:09.486105  369697 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 245.56µs
	I1017 20:11:09.486104  369697 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 206.043µs
	I1017 20:11:09.486122  369697 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1017 20:11:09.486049  369697 cache.go:107] acquiring lock: {Name:mk47a558c7bfc49677b52c17a6cb39d0217750ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.485891  369697 cache.go:107] acquiring lock: {Name:mk58620b56df75044fc4da2f75d8900d628a7966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.486223  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1017 20:11:09.486127  369697 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1017 20:11:09.486240  369697 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 266.65µs
	I1017 20:11:09.486252  369697 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1017 20:11:09.486223  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1017 20:11:09.486274  369697 cache.go:115] /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1017 20:11:09.486298  369697 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 415.745µs
	I1017 20:11:09.486311  369697 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1017 20:11:09.486273  369697 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 290.72µs
	I1017 20:11:09.486332  369697 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1017 20:11:09.486340  369697 cache.go:87] Successfully saved all images to host disk.
	I1017 20:11:09.507355  369697 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:11:09.507377  369697 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:11:09.507395  369697 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:11:09.507421  369697 start.go:360] acquireMachinesLock for no-preload-449580: {Name:mk19bcf32a0d1bfb1bd4e113ba01604af981e85e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:11:09.507474  369697 start.go:364] duration metric: took 37.038µs to acquireMachinesLock for "no-preload-449580"
	I1017 20:11:09.507493  369697 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:11:09.507498  369697 fix.go:54] fixHost starting: 
	I1017 20:11:09.507830  369697 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:11:09.526695  369697 fix.go:112] recreateIfNeeded on no-preload-449580: state=Stopped err=<nil>
	W1017 20:11:09.526752  369697 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:11:08.515833  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1017 20:11:08.515905  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:08.515972  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:08.544491  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:08.544514  344862 cri.go:89] found id: "924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	I1017 20:11:08.544518  344862 cri.go:89] found id: ""
	I1017 20:11:08.544526  344862 logs.go:282] 2 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709]
	I1017 20:11:08.544576  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:08.548791  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:08.553205  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:08.553280  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:08.581429  344862 cri.go:89] found id: ""
	I1017 20:11:08.581454  344862 logs.go:282] 0 containers: []
	W1017 20:11:08.581462  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:08.581468  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:08.581515  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:08.609680  344862 cri.go:89] found id: ""
	I1017 20:11:08.609715  344862 logs.go:282] 0 containers: []
	W1017 20:11:08.609728  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:08.609755  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:08.609812  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:08.638035  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:08.638061  344862 cri.go:89] found id: ""
	I1017 20:11:08.638071  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:08.638137  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:08.642210  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:08.642287  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:08.670140  344862 cri.go:89] found id: ""
	I1017 20:11:08.670167  344862 logs.go:282] 0 containers: []
	W1017 20:11:08.670178  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:08.670189  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:08.670256  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:08.699173  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:08.699201  344862 cri.go:89] found id: "a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc"
	I1017 20:11:08.699206  344862 cri.go:89] found id: ""
	I1017 20:11:08.699214  344862 logs.go:282] 2 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2 a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc]
	I1017 20:11:08.699262  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:08.703348  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:08.707502  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:08.707576  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:08.739927  344862 cri.go:89] found id: ""
	I1017 20:11:08.739960  344862 logs.go:282] 0 containers: []
	W1017 20:11:08.739973  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:08.739980  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:08.740045  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:08.772763  344862 cri.go:89] found id: ""
	I1017 20:11:08.772793  344862 logs.go:282] 0 containers: []
	W1017 20:11:08.772803  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:08.772821  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:08.772836  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:08.822890  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:08.822931  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:08.858423  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:08.858454  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:11:08.946461  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:08.946503  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:08.871511  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	W1017 20:11:11.374176  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	I1017 20:11:09.529089  369697 out.go:252] * Restarting existing docker container for "no-preload-449580" ...
	I1017 20:11:09.529197  369697 cli_runner.go:164] Run: docker start no-preload-449580
	I1017 20:11:09.784940  369697 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:11:09.807039  369697 kic.go:430] container "no-preload-449580" state is running.
	I1017 20:11:09.807422  369697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-449580
	I1017 20:11:09.827219  369697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/config.json ...
	I1017 20:11:09.827497  369697 machine.go:93] provisionDockerMachine start ...
	I1017 20:11:09.827582  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:09.847150  369697 main.go:141] libmachine: Using SSH client type: native
	I1017 20:11:09.847413  369697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 20:11:09.847427  369697 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:11:09.848075  369697 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39148->127.0.0.1:33184: read: connection reset by peer
	I1017 20:11:13.005010  369697 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-449580
	
	I1017 20:11:13.005039  369697 ubuntu.go:182] provisioning hostname "no-preload-449580"
	I1017 20:11:13.005126  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:13.029495  369697 main.go:141] libmachine: Using SSH client type: native
	I1017 20:11:13.029829  369697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 20:11:13.029866  369697 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-449580 && echo "no-preload-449580" | sudo tee /etc/hostname
	I1017 20:11:13.197081  369697 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-449580
	
	I1017 20:11:13.197191  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:13.220534  369697 main.go:141] libmachine: Using SSH client type: native
	I1017 20:11:13.220907  369697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 20:11:13.220933  369697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-449580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-449580/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-449580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:11:13.371904  369697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
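
The hostname provisioning above is a plain shell script pushed over SSH: set the hostname, persist it with tee, and make sure 127.0.1.1 maps to it in /etc/hosts. A minimal sketch of the same run-one-command-over-SSH pattern, using golang.org/x/crypto/ssh; the address, user, and key path are stand-ins taken from this log, not an excerpt of minikube's libmachine code:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runSSH executes one command on the remote machine and returns its
    // combined stdout/stderr, roughly what "About to run SSH command" does.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runSSH("127.0.0.1:33184", "docker",
            "/home/jenkins/.minikube/machines/no-preload-449580/id_rsa",
            "hostname")
        fmt.Println(out, err)
    }
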
	I1017 20:11:13.371937  369697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:11:13.371972  369697 ubuntu.go:190] setting up certificates
	I1017 20:11:13.371983  369697 provision.go:84] configureAuth start
	I1017 20:11:13.372088  369697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-449580
	I1017 20:11:13.391815  369697 provision.go:143] copyHostCerts
	I1017 20:11:13.391885  369697 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:11:13.391902  369697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:11:13.391979  369697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:11:13.393129  369697 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:11:13.393150  369697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:11:13.393192  369697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:11:13.393267  369697 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:11:13.393286  369697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:11:13.393314  369697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:11:13.393365  369697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.no-preload-449580 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-449580]
	I1017 20:11:13.744061  369697 provision.go:177] copyRemoteCerts
	I1017 20:11:13.744140  369697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:11:13.744188  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:13.766722  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:13.875513  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:11:13.900515  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:11:13.924394  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:11:13.948134  369697 provision.go:87] duration metric: took 576.13484ms to configureAuth
	I1017 20:11:13.948164  369697 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:11:13.948396  369697 config.go:182] Loaded profile config "no-preload-449580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:11:13.948515  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:13.971558  369697 main.go:141] libmachine: Using SSH client type: native
	I1017 20:11:13.971916  369697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 20:11:13.971945  369697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:11:14.455814  369697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:11:14.455843  369697 machine.go:96] duration metric: took 4.628324875s to provisionDockerMachine
	I1017 20:11:14.455858  369697 start.go:293] postStartSetup for "no-preload-449580" (driver="docker")
	I1017 20:11:14.455871  369697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:11:14.455943  369697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:11:14.456014  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:14.478334  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:14.589292  369697 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:11:14.595110  369697 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:11:14.595148  369697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:11:14.595164  369697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:11:14.595236  369697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:11:14.595362  369697 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:11:14.595507  369697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:11:14.607816  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:11:14.634414  369697 start.go:296] duration metric: took 178.536291ms for postStartSetup
	I1017 20:11:14.634531  369697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:11:14.634583  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:14.658637  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:14.766088  369697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:11:14.773258  369697 fix.go:56] duration metric: took 5.265741716s for fixHost
	I1017 20:11:14.773296  369697 start.go:83] releasing machines lock for "no-preload-449580", held for 5.265809991s
	I1017 20:11:14.773375  369697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-449580
	I1017 20:11:14.796569  369697 ssh_runner.go:195] Run: cat /version.json
	I1017 20:11:14.796623  369697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:11:14.796628  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:14.796703  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:14.820095  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:14.820652  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:15.005027  369697 ssh_runner.go:195] Run: systemctl --version
	I1017 20:11:15.014767  369697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:11:15.063813  369697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:11:15.070784  369697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:11:15.070859  369697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:11:15.082291  369697 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:11:15.082319  369697 start.go:495] detecting cgroup driver to use...
	I1017 20:11:15.082356  369697 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:11:15.082404  369697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:11:15.105262  369697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:11:15.123903  369697 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:11:15.123964  369697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:11:15.146167  369697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:11:15.164332  369697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:11:15.275997  369697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:11:15.397994  369697 docker.go:234] disabling docker service ...
	I1017 20:11:15.398071  369697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:11:15.420570  369697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:11:15.438073  369697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:11:15.563348  369697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:11:15.662806  369697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:11:15.676940  369697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:11:15.693605  369697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:11:15.693675  369697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.703993  369697 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:11:15.704123  369697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.716079  369697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.726038  369697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.737273  369697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:11:15.748257  369697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.759653  369697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.771820  369697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:11:15.785001  369697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:11:15.794697  369697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:11:15.804198  369697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:11:15.921518  369697 ssh_runner.go:195] Run: sudo systemctl restart crio
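
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the systemd cgroup manager, a "pod" conmon cgroup, and the unprivileged-port sysctl before crio is restarted. An illustrative reconstruction of the resulting drop-in (the [crio.runtime] and [crio.image] section headers are assumed from CRI-O's standard config layout; the log only shows the individual keys being rewritten):

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
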
	I1017 20:11:16.801402  369697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:11:16.801505  369697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:11:16.807082  369697 start.go:563] Will wait 60s for crictl version
	I1017 20:11:16.807155  369697 ssh_runner.go:195] Run: which crictl
	I1017 20:11:16.812773  369697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:11:16.845085  369697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:11:16.845171  369697 ssh_runner.go:195] Run: crio --version
	I1017 20:11:16.884182  369697 ssh_runner.go:195] Run: crio --version
	I1017 20:11:16.929915  369697 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:11:16.931726  369697 cli_runner.go:164] Run: docker network inspect no-preload-449580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:11:16.953119  369697 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1017 20:11:16.958179  369697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:11:16.972334  369697 kubeadm.go:883] updating cluster {Name:no-preload-449580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:11:16.972487  369697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:11:16.972532  369697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:11:17.018057  369697 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:11:17.018081  369697 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:11:17.018089  369697 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1017 20:11:17.018198  369697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-449580 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:11:17.018298  369697 ssh_runner.go:195] Run: crio config
	I1017 20:11:17.082825  369697 cni.go:84] Creating CNI manager for ""
	I1017 20:11:17.082853  369697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:11:17.082875  369697 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:11:17.082908  369697 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-449580 NodeName:no-preload-449580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:11:17.083064  369697 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-449580"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
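
The generated bundle above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), and the kubelet document must agree with the CRI-O settings applied earlier: the systemd cgroup driver and the crio.sock endpoint. A small sketch that cross-checks those two fields, assuming the gopkg.in/yaml.v3 package and the /var/tmp/minikube/kubeadm.yaml.new path that this log scp's the config to:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    type kubeletDoc struct {
        Kind                     string `yaml:"kind"`
        CgroupDriver             string `yaml:"cgroupDriver"`
        ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    }

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f) // iterates the "---"-separated documents
        for {
            var d kubeletDoc
            if err := dec.Decode(&d); err != nil {
                break // io.EOF once all four documents are read
            }
            if d.Kind != "KubeletConfiguration" {
                continue
            }
            fmt.Printf("cgroupDriver=%s endpoint=%s\n",
                d.CgroupDriver, d.ContainerRuntimeEndpoint)
            if d.CgroupDriver != "systemd" {
                fmt.Println("mismatch with CRI-O's cgroup_manager")
            }
        }
    }
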
	
	I1017 20:11:17.083146  369697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:11:17.094587  369697 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:11:17.094684  369697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:11:17.105420  369697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 20:11:17.124560  369697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:11:17.143255  369697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1017 20:11:17.160975  369697 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:11:17.166050  369697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:11:17.178161  369697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:11:17.271526  369697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:11:17.295806  369697 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580 for IP: 192.168.103.2
	I1017 20:11:17.295832  369697 certs.go:195] generating shared ca certs ...
	I1017 20:11:17.295853  369697 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:11:17.296045  369697 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:11:17.296127  369697 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:11:17.296145  369697 certs.go:257] generating profile certs ...
	I1017 20:11:17.296247  369697 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.key
	I1017 20:11:17.296322  369697 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.key.15dab988
	I1017 20:11:17.296382  369697 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.key
	I1017 20:11:17.296528  369697 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:11:17.296563  369697 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:11:17.296576  369697 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:11:17.296600  369697 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:11:17.296621  369697 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:11:17.296641  369697 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:11:17.296693  369697 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:11:17.297547  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:11:17.323192  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:11:17.348271  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:11:17.376073  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:11:17.406389  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 20:11:17.431556  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:11:17.456803  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:11:17.481444  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:11:17.506379  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:11:17.533328  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:11:17.558369  369697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:11:17.584046  369697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:11:17.602520  369697 ssh_runner.go:195] Run: openssl version
	I1017 20:11:17.611281  369697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:11:17.623462  369697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:11:17.629201  369697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:11:17.629289  369697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:11:17.681277  369697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:11:17.692953  369697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:11:17.706377  369697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:11:17.712206  369697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:11:17.712285  369697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:11:17.769117  369697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:11:17.779990  369697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:11:17.792341  369697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:11:17.798211  369697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:11:17.798273  369697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:11:17.854590  369697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
	I1017 20:11:17.866662  369697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:11:17.874074  369697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:11:17.929263  369697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:11:17.995850  369697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:11:18.048085  369697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:11:18.092471  369697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:11:18.136040  369697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
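
Each `openssl x509 -checkend 86400` run above asks one question: does the certificate stay valid for at least another 24 hours? A standard-library Go equivalent of that test (the path is one of the certs checked in this log; minikube itself shells out to openssl over SSH rather than doing this in-process):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same test as -checkend 86400: expiry within the next 24h fails.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate valid until", cert.NotAfter)
    }
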
	I1017 20:11:18.181322  369697 kubeadm.go:400] StartCluster: {Name:no-preload-449580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-449580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:11:18.181432  369697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:11:18.181514  369697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:11:18.218072  369697 cri.go:89] found id: "344d142d37fe5e0cf83f172832d2f0380baafcfe5af95563d75af080c8f38c3c"
	I1017 20:11:18.218101  369697 cri.go:89] found id: "6cf770e38746c4716bb308f95e151bdd97000b0a2142f8c26a0763b88060594f"
	I1017 20:11:18.218107  369697 cri.go:89] found id: "09d3164355d524c8b81db0b45da6184b8608f2453c76034f04243ff5a2366382"
	I1017 20:11:18.218111  369697 cri.go:89] found id: "da4d6ced5b128794ebcf1eb3fba8085c8b428be8cc20e7b0cbbeb23351ceb4d4"
	I1017 20:11:18.218115  369697 cri.go:89] found id: ""
	I1017 20:11:18.218169  369697 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:11:18.232876  369697 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:11:18Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:11:18.232963  369697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:11:18.243350  369697 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:11:18.243372  369697 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:11:18.243425  369697 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:11:18.252041  369697 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:11:18.253052  369697 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-449580" does not appear in /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:11:18.253650  369697 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-135723/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-449580" cluster setting kubeconfig missing "no-preload-449580" context setting]
	I1017 20:11:18.254694  369697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:11:18.256699  369697 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:11:18.265849  369697 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1017 20:11:18.265892  369697 kubeadm.go:601] duration metric: took 22.513294ms to restartPrimaryControlPlane
	I1017 20:11:18.265904  369697 kubeadm.go:402] duration metric: took 84.595638ms to StartCluster
	I1017 20:11:18.265935  369697 settings.go:142] acquiring lock: {Name:mka4633fb25e97d0a4c6d64012444d90b7517c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:11:18.266007  369697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:11:18.267783  369697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:11:18.268056  369697 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:11:18.268111  369697 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:11:18.268260  369697 config.go:182] Loaded profile config "no-preload-449580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:11:18.268313  369697 addons.go:69] Setting default-storageclass=true in profile "no-preload-449580"
	I1017 20:11:18.268312  369697 addons.go:69] Setting dashboard=true in profile "no-preload-449580"
	I1017 20:11:18.268336  369697 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-449580"
	I1017 20:11:18.268341  369697 addons.go:238] Setting addon dashboard=true in "no-preload-449580"
	W1017 20:11:18.268350  369697 addons.go:247] addon dashboard should already be in state true
	I1017 20:11:18.268380  369697 host.go:66] Checking if "no-preload-449580" exists ...
	I1017 20:11:18.268542  369697 addons.go:69] Setting storage-provisioner=true in profile "no-preload-449580"
	I1017 20:11:18.268573  369697 addons.go:238] Setting addon storage-provisioner=true in "no-preload-449580"
	W1017 20:11:18.268589  369697 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:11:18.268622  369697 host.go:66] Checking if "no-preload-449580" exists ...
	I1017 20:11:18.268659  369697 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:11:18.269151  369697 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:11:18.269453  369697 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:11:18.271876  369697 out.go:179] * Verifying Kubernetes components...
	I1017 20:11:18.273577  369697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:11:18.301350  369697 addons.go:238] Setting addon default-storageclass=true in "no-preload-449580"
	W1017 20:11:18.301375  369697 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:11:18.301403  369697 host.go:66] Checking if "no-preload-449580" exists ...
	I1017 20:11:18.301856  369697 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:11:18.302662  369697 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 20:11:18.304361  369697 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 20:11:18.305887  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 20:11:18.305908  369697 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 20:11:18.305968  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:18.307997  369697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1017 20:11:13.872483  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	W1017 20:11:16.371852  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	W1017 20:11:18.374401  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	I1017 20:11:18.310061  369697 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:11:18.310083  369697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:11:18.310144  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:18.337046  369697 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:11:18.337083  369697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:11:18.337146  369697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:11:18.344242  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:18.344915  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:18.364268  369697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:11:18.439288  369697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:11:18.453620  369697 node_ready.go:35] waiting up to 6m0s for node "no-preload-449580" to be "Ready" ...
	I1017 20:11:18.471551  369697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:11:18.472160  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 20:11:18.472184  369697 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 20:11:18.487614  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 20:11:18.487642  369697 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 20:11:18.500967  369697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:11:18.504449  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 20:11:18.504476  369697 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 20:11:18.527921  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 20:11:18.527947  369697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 20:11:18.549141  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 20:11:18.549166  369697 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 20:11:18.564652  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 20:11:18.564681  369697 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 20:11:18.583489  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 20:11:18.583522  369697 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 20:11:18.598664  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 20:11:18.598689  369697 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 20:11:18.614079  369697 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:11:18.614110  369697 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 20:11:18.629544  369697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:11:19.574263  369697 node_ready.go:49] node "no-preload-449580" is "Ready"
	I1017 20:11:19.574305  369697 node_ready.go:38] duration metric: took 1.120634369s for node "no-preload-449580" to be "Ready" ...
	I1017 20:11:19.574329  369697 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:11:19.574421  369697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:11:20.094382  369697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.622795832s)
	I1017 20:11:20.094444  369697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.593451666s)
	I1017 20:11:20.094799  369697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.465209602s)
	I1017 20:11:20.095081  369697 api_server.go:72] duration metric: took 1.826985712s to wait for apiserver process to appear ...
	I1017 20:11:20.095103  369697 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:11:20.095125  369697 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:11:20.097141  369697 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-449580 addons enable metrics-server
	
	I1017 20:11:20.100673  369697 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:11:20.100703  369697 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
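The [+]/[-] listing above is the apiserver's verbose healthz output; the two [-] entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are post-start hooks that simply have not finished yet on a freshly restarted control plane. The same endpoint can be queried directly, as a sketch assuming a kubeconfig pointing at this cluster:

	# Same data minikube is polling: the verbose healthz endpoint.
	kubectl get --raw='/healthz?verbose'
	# Anonymous access usually works too, since the default
	# system:public-info-viewer binding exposes /healthz
	# (assumption: the cluster's default RBAC is intact).
	curl -sk 'https://192.168.103.2:8443/healthz?verbose'
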
	I1017 20:11:20.105694  369697 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
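metrics-server was not part of this run's addon set, hence the hint printed a few lines up. Enabling it and confirming metrics flow would look roughly like this (the kubectl top check is a sketch; it needs a minute or two after enabling before data appears):

	minikube -p no-preload-449580 addons enable metrics-server
	kubectl top nodes
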
	I1017 20:11:19.010263  344862 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.063734586s)
	W1017 20:11:19.010317  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
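The describe-nodes attempt above burned just over ten seconds before failing with a TLS handshake timeout, which is a different failure mode from the connection-refused errors later in this log: the TCP connection opened, but the apiserver never completed the handshake in time. A quick manual triage, as a sketch (the profile owning 192.168.76.2 is never named in this excerpt, so <profile> below is a placeholder):

	# Does a handshake complete within 5s? (curl exit 28 means it timed out)
	curl -sk --max-time 5 https://192.168.76.2:8443/healthz ; echo "exit=$?"
	# Inside the node: is anything still listening on 8443?
	minikube -p <profile> ssh -- sudo ss -ltnp 'sport = :8443'
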
	I1017 20:11:19.010335  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:19.010348  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:19.075018  344862 logs.go:123] Gathering logs for kube-controller-manager [a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc] ...
	I1017 20:11:19.075112  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc"
	I1017 20:11:19.111254  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:19.111334  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:19.139790  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:19.139834  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:19.189730  344862 logs.go:123] Gathering logs for kube-apiserver [924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709] ...
	I1017 20:11:19.189786  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	I1017 20:11:19.223756  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:19.223793  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:21.757248  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1017 20:11:20.871255  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	W1017 20:11:23.370881  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	I1017 20:11:20.108700  369697 addons.go:514] duration metric: took 1.840585262s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 20:11:20.595899  369697 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:11:20.600468  369697 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:11:20.600501  369697 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:11:21.095933  369697 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:11:21.100871  369697 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1017 20:11:21.101907  369697 api_server.go:141] control plane version: v1.34.1
	I1017 20:11:21.101931  369697 api_server.go:131] duration metric: took 1.006820268s to wait for apiserver health ...
	I1017 20:11:21.101939  369697 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:11:21.106206  369697 system_pods.go:59] 8 kube-system pods found
	I1017 20:11:21.106249  369697 system_pods.go:61] "coredns-66bc5c9577-p4n86" [617d6937-5180-4329-853d-32a9b1c9f510] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:11:21.106260  369697 system_pods.go:61] "etcd-no-preload-449580" [fb200953-462a-4d0e-a897-8503ebe3a57f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:11:21.106271  369697 system_pods.go:61] "kindnet-9xg9h" [673bfee2-dc28-4a9a-815e-0f57d9dd92f8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 20:11:21.106285  369697 system_pods.go:61] "kube-apiserver-no-preload-449580" [4b67f8cf-2d87-4f26-9c70-08870061761a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:11:21.106298  369697 system_pods.go:61] "kube-controller-manager-no-preload-449580" [f1bb561c-bd36-440a-a61e-bae20669a3d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:11:21.106310  369697 system_pods.go:61] "kube-proxy-m5g7f" [b0d544c6-f6c2-459c-93b9-22452c8a77d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:11:21.106320  369697 system_pods.go:61] "kube-scheduler-no-preload-449580" [2f387b59-7741-4394-8cdd-791ef636b645] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:11:21.106332  369697 system_pods.go:61] "storage-provisioner" [53d908ca-46ee-49bd-9de8-af09045721ef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:11:21.106343  369697 system_pods.go:74] duration metric: took 4.396853ms to wait for pod list to return data ...
	I1017 20:11:21.106360  369697 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:11:21.109083  369697 default_sa.go:45] found service account: "default"
	I1017 20:11:21.109107  369697 default_sa.go:55] duration metric: took 2.740469ms for default service account to be created ...
	I1017 20:11:21.109119  369697 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:11:21.112010  369697 system_pods.go:86] 8 kube-system pods found
	I1017 20:11:21.112041  369697 system_pods.go:89] "coredns-66bc5c9577-p4n86" [617d6937-5180-4329-853d-32a9b1c9f510] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:11:21.112049  369697 system_pods.go:89] "etcd-no-preload-449580" [fb200953-462a-4d0e-a897-8503ebe3a57f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:11:21.112058  369697 system_pods.go:89] "kindnet-9xg9h" [673bfee2-dc28-4a9a-815e-0f57d9dd92f8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 20:11:21.112066  369697 system_pods.go:89] "kube-apiserver-no-preload-449580" [4b67f8cf-2d87-4f26-9c70-08870061761a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:11:21.112072  369697 system_pods.go:89] "kube-controller-manager-no-preload-449580" [f1bb561c-bd36-440a-a61e-bae20669a3d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:11:21.112078  369697 system_pods.go:89] "kube-proxy-m5g7f" [b0d544c6-f6c2-459c-93b9-22452c8a77d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:11:21.112086  369697 system_pods.go:89] "kube-scheduler-no-preload-449580" [2f387b59-7741-4394-8cdd-791ef636b645] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:11:21.112102  369697 system_pods.go:89] "storage-provisioner" [53d908ca-46ee-49bd-9de8-af09045721ef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:11:21.112113  369697 system_pods.go:126] duration metric: took 2.987402ms to wait for k8s-apps to be running ...
	I1017 20:11:21.112123  369697 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:11:21.112170  369697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:11:21.126489  369697 system_svc.go:56] duration metric: took 14.352119ms WaitForService to wait for kubelet
	I1017 20:11:21.126520  369697 kubeadm.go:586] duration metric: took 2.858428752s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
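The kubeadm wait summarized above fans out over several independent checks: apiserver health, kube-system pods listed, the default service account, node readiness, and the kubelet unit (the systemctl is-active call a few lines up). An equivalent manual spot-check, sketched with the same 4m budget the extra pod wait uses below:

	# Kubelet unit up?
	minikube -p no-preload-449580 ssh -- sudo systemctl is-active kubelet
	# All kube-system pods Ready within 4 minutes:
	kubectl -n kube-system wait --for=condition=Ready pod --all --timeout=4m
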
	I1017 20:11:21.126538  369697 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:11:21.130113  369697 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:11:21.130152  369697 node_conditions.go:123] node cpu capacity is 8
	I1017 20:11:21.130170  369697 node_conditions.go:105] duration metric: took 3.625938ms to run NodePressure ...
	I1017 20:11:21.130187  369697 start.go:241] waiting for startup goroutines ...
	I1017 20:11:21.130197  369697 start.go:246] waiting for cluster config update ...
	I1017 20:11:21.130212  369697 start.go:255] writing updated cluster config ...
	I1017 20:11:21.130573  369697 ssh_runner.go:195] Run: rm -f paused
	I1017 20:11:21.135111  369697 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:11:21.139554  369697 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p4n86" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:11:23.144945  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	I1017 20:11:23.633422  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:56782->192.168.76.2:8443: read: connection reset by peer
	I1017 20:11:23.633499  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:23.633558  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:23.664997  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:23.665022  344862 cri.go:89] found id: "924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	I1017 20:11:23.665027  344862 cri.go:89] found id: ""
	I1017 20:11:23.665036  344862 logs.go:282] 2 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709]
	I1017 20:11:23.665106  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:23.669764  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:23.673631  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:23.673703  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:23.700454  344862 cri.go:89] found id: ""
	I1017 20:11:23.700480  344862 logs.go:282] 0 containers: []
	W1017 20:11:23.700487  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:23.700493  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:23.700538  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:23.730521  344862 cri.go:89] found id: ""
	I1017 20:11:23.730546  344862 logs.go:282] 0 containers: []
	W1017 20:11:23.730554  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:23.730560  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:23.730606  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:23.758499  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:23.758525  344862 cri.go:89] found id: ""
	I1017 20:11:23.758534  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:23.758596  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:23.762703  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:23.762798  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:23.790773  344862 cri.go:89] found id: ""
	I1017 20:11:23.790803  344862 logs.go:282] 0 containers: []
	W1017 20:11:23.790815  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:23.790823  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:23.790889  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:23.824954  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:23.824998  344862 cri.go:89] found id: "a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc"
	I1017 20:11:23.825004  344862 cri.go:89] found id: ""
	I1017 20:11:23.825014  344862 logs.go:282] 2 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2 a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc]
	I1017 20:11:23.825081  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:23.829632  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:23.834349  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:23.834409  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:23.865416  344862 cri.go:89] found id: ""
	I1017 20:11:23.865448  344862 logs.go:282] 0 containers: []
	W1017 20:11:23.865459  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:23.865467  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:23.865531  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:23.899096  344862 cri.go:89] found id: ""
	I1017 20:11:23.899138  344862 logs.go:282] 0 containers: []
	W1017 20:11:23.899150  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:23.899171  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:23.899187  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:11:24.005852  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:24.005898  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:24.073556  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:11:24.073595  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:24.073618  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:24.120002  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:24.120047  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:24.159964  344862 logs.go:123] Gathering logs for kube-controller-manager [a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc] ...
	I1017 20:11:24.160001  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a498c39c61817b1dc310ac097cec7a185f03c975c7c32e9332cb78be258e95dc"
	I1017 20:11:24.198661  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:24.198699  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:24.227522  344862 logs.go:123] Gathering logs for kube-apiserver [924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709] ...
	I1017 20:11:24.227562  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	W1017 20:11:24.262931  344862 logs.go:130] failed kube-apiserver [924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709": Process exited with status 1
	stdout:
	
	stderr:
	E1017 20:11:24.259824    3598 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709\": container with ID starting with 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709 not found: ID does not exist" containerID="924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	time="2025-10-17T20:11:24Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709\": container with ID starting with 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1017 20:11:24.259824    3598 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709\": container with ID starting with 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709 not found: ID does not exist" containerID="924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709"
	time="2025-10-17T20:11:24Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709\": container with ID starting with 924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709 not found: ID does not exist"
	
	** /stderr **
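The crictl logs call above is fatal because container 924cfa4c... was still listed at 20:11:23.66 but had been removed by the runtime before the fetch at 20:11:24.26. Guarding the fetch closes that race, as a sketch:

	# Only read logs for IDs the runtime still knows about.
	id=924cfa4cc3a70c92d38e3a00be69530b0f9553e19c994c2ca886c0666b648709
	if sudo crictl inspect "$id" >/dev/null 2>&1; then
	  sudo crictl logs --tail 400 "$id"
	else
	  echo "container $id is gone; skipping"
	fi
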
	I1017 20:11:24.262961  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:24.262979  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:24.337101  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:24.337148  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:24.407972  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:24.408014  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
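That completes one full gathering pass: kubelet journal, describe nodes, per-container crictl logs, the CRI-O journal, dmesg, and a container-status fallback. Condensed into one sweep, with every command lifted from the log above (run inside the node):

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# Fetch the tail of every kube-apiserver container the runtime has seen.
	for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	  sudo crictl logs --tail 400 "$id"
	done
	sudo crictl ps -a || sudo docker ps -a
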
	I1017 20:11:26.956886  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:11:26.957441  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:11:26.957511  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:26.957570  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:26.986878  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:26.986907  344862 cri.go:89] found id: ""
	I1017 20:11:26.986919  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:11:26.986983  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:26.991365  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:26.991439  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:27.020189  344862 cri.go:89] found id: ""
	I1017 20:11:27.020222  344862 logs.go:282] 0 containers: []
	W1017 20:11:27.020235  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:27.020242  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:27.020300  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:27.048308  344862 cri.go:89] found id: ""
	I1017 20:11:27.048340  344862 logs.go:282] 0 containers: []
	W1017 20:11:27.048353  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:27.048361  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:27.048423  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:27.078305  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:27.078325  344862 cri.go:89] found id: ""
	I1017 20:11:27.078333  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:27.078385  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:27.082965  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:27.083035  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:27.115195  344862 cri.go:89] found id: ""
	I1017 20:11:27.115222  344862 logs.go:282] 0 containers: []
	W1017 20:11:27.115230  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:27.115237  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:27.115304  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:27.145191  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:27.145217  344862 cri.go:89] found id: ""
	I1017 20:11:27.145228  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:11:27.145292  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:27.150380  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:27.150451  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:27.183023  344862 cri.go:89] found id: ""
	I1017 20:11:27.183058  344862 logs.go:282] 0 containers: []
	W1017 20:11:27.183069  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:27.183078  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:27.183142  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:27.215624  344862 cri.go:89] found id: ""
	I1017 20:11:27.215656  344862 logs.go:282] 0 containers: []
	W1017 20:11:27.215667  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:27.215678  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:27.215693  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:27.285546  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
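From 20:11:21 onward this run has settled into a loop: probe healthz, hit connection reset or refused, re-gather logs, retry, with probes landing roughly every three seconds (21.7, 26.9, 30.1, 33.3, 36.4, 39.5). A minimal version of that poll, sketched with the interval inferred from those timestamps:

	# Poll healthz until it answers, or give up after 4 minutes.
	deadline=$((SECONDS + 240))
	until curl -skf --max-time 2 https://192.168.76.2:8443/healthz >/dev/null; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never became healthy" >&2; exit 1; }
	  sleep 3
	done
	echo "apiserver healthy"
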
	I1017 20:11:27.285571  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:27.285591  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:27.327286  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:27.327327  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:27.395895  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:27.395939  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:27.431183  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:27.431212  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:27.482825  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:27.482870  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:27.519037  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:27.519071  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1017 20:11:25.371208  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	W1017 20:11:27.372251  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	W1017 20:11:25.145459  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	W1017 20:11:27.146492  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	W1017 20:11:29.372307  365613 pod_ready.go:104] pod "coredns-5dd5756b68-xrnvz" is not "Ready", error: <nil>
	I1017 20:11:30.870784  365613 pod_ready.go:94] pod "coredns-5dd5756b68-xrnvz" is "Ready"
	I1017 20:11:30.870813  365613 pod_ready.go:86] duration metric: took 31.005781208s for pod "coredns-5dd5756b68-xrnvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:30.873886  365613 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:30.878830  365613 pod_ready.go:94] pod "etcd-old-k8s-version-726816" is "Ready"
	I1017 20:11:30.878859  365613 pod_ready.go:86] duration metric: took 4.941209ms for pod "etcd-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:30.881855  365613 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:30.886549  365613 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-726816" is "Ready"
	I1017 20:11:30.886575  365613 pod_ready.go:86] duration metric: took 4.69699ms for pod "kube-apiserver-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:30.889440  365613 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:31.069368  365613 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-726816" is "Ready"
	I1017 20:11:31.069394  365613 pod_ready.go:86] duration metric: took 179.926258ms for pod "kube-controller-manager-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:31.269156  365613 pod_ready.go:83] waiting for pod "kube-proxy-xp229" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:31.668596  365613 pod_ready.go:94] pod "kube-proxy-xp229" is "Ready"
	I1017 20:11:31.668627  365613 pod_ready.go:86] duration metric: took 399.446544ms for pod "kube-proxy-xp229" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:31.869279  365613 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:32.269164  365613 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-726816" is "Ready"
	I1017 20:11:32.269191  365613 pod_ready.go:86] duration metric: took 399.890288ms for pod "kube-scheduler-old-k8s-version-726816" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:11:32.269203  365613 pod_ready.go:40] duration metric: took 32.408503539s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
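The extra wait above walks one label selector at a time across the control-plane set. The same check from outside the cluster, sketched with the exact labels the log prints:

	# Check each control-plane selector the log waits on.
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system get pods -l "$sel"
	done
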
	I1017 20:11:32.315370  365613 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1017 20:11:32.319628  365613 out.go:203] 
	W1017 20:11:32.321206  365613 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1017 20:11:32.322545  365613 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1017 20:11:32.324306  365613 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-726816" cluster and "default" namespace by default
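The warning above flags a six-minor-version skew (kubectl 1.34.1 against a 1.28.0 cluster), well outside kubectl's supported +/-1 window; the suggested workaround runs the version-matched kubectl that minikube downloads per cluster:

	minikube -p old-k8s-version-726816 kubectl -- get pods -A
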
	I1017 20:11:27.625085  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:27.625121  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:30.150840  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:11:30.152251  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:11:30.152326  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:30.152389  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:30.190765  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:30.190796  344862 cri.go:89] found id: ""
	I1017 20:11:30.190807  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:11:30.190871  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:30.198196  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:30.198278  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:30.235078  344862 cri.go:89] found id: ""
	I1017 20:11:30.235110  344862 logs.go:282] 0 containers: []
	W1017 20:11:30.235122  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:30.235130  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:30.235198  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:30.266189  344862 cri.go:89] found id: ""
	I1017 20:11:30.266222  344862 logs.go:282] 0 containers: []
	W1017 20:11:30.266236  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:30.266245  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:30.266296  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:30.302098  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:30.302130  344862 cri.go:89] found id: ""
	I1017 20:11:30.302144  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:30.302212  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:30.306947  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:30.307024  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:30.347793  344862 cri.go:89] found id: ""
	I1017 20:11:30.347822  344862 logs.go:282] 0 containers: []
	W1017 20:11:30.347831  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:30.347837  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:30.347884  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:30.378723  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:30.378760  344862 cri.go:89] found id: ""
	I1017 20:11:30.378771  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:11:30.378834  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:30.383616  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:30.383689  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:30.416596  344862 cri.go:89] found id: ""
	I1017 20:11:30.416628  344862 logs.go:282] 0 containers: []
	W1017 20:11:30.416638  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:30.416645  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:30.416695  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:30.449864  344862 cri.go:89] found id: ""
	I1017 20:11:30.449902  344862 logs.go:282] 0 containers: []
	W1017 20:11:30.449915  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:30.449928  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:30.449970  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:30.484041  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:30.484091  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:11:30.580399  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:30.580440  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:30.601901  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:30.601943  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:30.673586  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:11:30.673615  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:30.673638  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:30.711383  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:30.711427  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:30.771567  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:30.771605  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:30.802475  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:30.802507  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1017 20:11:29.646264  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	W1017 20:11:32.145470  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	I1017 20:11:33.350633  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:11:33.351164  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:11:33.351221  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:33.351291  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:33.380099  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:33.380127  344862 cri.go:89] found id: ""
	I1017 20:11:33.380136  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:11:33.380194  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:33.384561  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:33.384621  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:33.413184  344862 cri.go:89] found id: ""
	I1017 20:11:33.413216  344862 logs.go:282] 0 containers: []
	W1017 20:11:33.413225  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:33.413231  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:33.413279  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:33.442873  344862 cri.go:89] found id: ""
	I1017 20:11:33.442902  344862 logs.go:282] 0 containers: []
	W1017 20:11:33.442910  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:33.442917  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:33.442970  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:33.471895  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:33.471919  344862 cri.go:89] found id: ""
	I1017 20:11:33.471929  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:33.471988  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:33.476614  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:33.476689  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:33.505550  344862 cri.go:89] found id: ""
	I1017 20:11:33.505580  344862 logs.go:282] 0 containers: []
	W1017 20:11:33.505591  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:33.505600  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:33.505668  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:33.534791  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:33.534817  344862 cri.go:89] found id: ""
	I1017 20:11:33.534832  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:11:33.534892  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:33.539320  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:33.539401  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:33.568534  344862 cri.go:89] found id: ""
	I1017 20:11:33.568559  344862 logs.go:282] 0 containers: []
	W1017 20:11:33.568577  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:33.568586  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:33.568640  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:33.595965  344862 cri.go:89] found id: ""
	I1017 20:11:33.595998  344862 logs.go:282] 0 containers: []
	W1017 20:11:33.596015  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:33.596027  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:33.596043  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:33.629984  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:33.630025  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:11:33.722344  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:33.722387  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:33.743220  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:33.743266  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:33.804264  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:11:33.804305  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:33.804319  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:33.836756  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:33.836796  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:33.890315  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:33.890368  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:33.918544  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:33.918572  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:36.467882  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:11:36.468400  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:11:36.468463  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:36.468517  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:36.497657  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:36.497679  344862 cri.go:89] found id: ""
	I1017 20:11:36.497689  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:11:36.497765  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:36.501932  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:36.502005  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:36.532047  344862 cri.go:89] found id: ""
	I1017 20:11:36.532092  344862 logs.go:282] 0 containers: []
	W1017 20:11:36.532103  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:36.532111  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:36.532172  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:36.563659  344862 cri.go:89] found id: ""
	I1017 20:11:36.563686  344862 logs.go:282] 0 containers: []
	W1017 20:11:36.563694  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:36.563701  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:36.563781  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:36.595006  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:36.595053  344862 cri.go:89] found id: ""
	I1017 20:11:36.595062  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:36.595109  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:36.599182  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:36.599263  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:36.626774  344862 cri.go:89] found id: ""
	I1017 20:11:36.626805  344862 logs.go:282] 0 containers: []
	W1017 20:11:36.626815  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:36.626824  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:36.626887  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:36.657682  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:36.657708  344862 cri.go:89] found id: ""
	I1017 20:11:36.657717  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:11:36.657788  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:36.662266  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:36.662349  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:36.691141  344862 cri.go:89] found id: ""
	I1017 20:11:36.691172  344862 logs.go:282] 0 containers: []
	W1017 20:11:36.691182  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:36.691190  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:36.691250  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:36.719684  344862 cri.go:89] found id: ""
	I1017 20:11:36.719709  344862 logs.go:282] 0 containers: []
	W1017 20:11:36.719717  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:36.719725  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:36.719770  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:36.773604  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:36.773642  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:36.801603  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:36.801632  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:36.850685  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:36.850725  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:36.882915  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:36.882946  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:11:36.974120  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:36.974159  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:36.993593  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:36.993641  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:37.053479  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:11:37.053502  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:37.053515  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	W1017 20:11:34.645858  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	W1017 20:11:37.147584  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	I1017 20:11:39.587830  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:11:39.588401  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:11:39.588463  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:39.588525  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:39.619000  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:39.619023  344862 cri.go:89] found id: ""
	I1017 20:11:39.619031  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:11:39.619079  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:39.623155  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:39.623241  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:39.651366  344862 cri.go:89] found id: ""
	I1017 20:11:39.651397  344862 logs.go:282] 0 containers: []
	W1017 20:11:39.651409  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:39.651416  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:39.651477  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:39.681335  344862 cri.go:89] found id: ""
	I1017 20:11:39.681358  344862 logs.go:282] 0 containers: []
	W1017 20:11:39.681365  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:39.681373  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:39.681420  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:39.710507  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:39.710534  344862 cri.go:89] found id: ""
	I1017 20:11:39.710544  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:39.710605  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:39.714719  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:39.714811  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:39.741277  344862 cri.go:89] found id: ""
	I1017 20:11:39.741301  344862 logs.go:282] 0 containers: []
	W1017 20:11:39.741313  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:39.741319  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:39.741366  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:39.769983  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:39.770006  344862 cri.go:89] found id: ""
	I1017 20:11:39.770017  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:11:39.770085  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:39.774236  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:39.774314  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:39.802649  344862 cri.go:89] found id: ""
	I1017 20:11:39.802681  344862 logs.go:282] 0 containers: []
	W1017 20:11:39.802693  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:39.802701  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:39.802786  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:39.831772  344862 cri.go:89] found id: ""
	I1017 20:11:39.831804  344862 logs.go:282] 0 containers: []
	W1017 20:11:39.831811  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:39.831822  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:39.831840  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:39.851041  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:39.851078  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:39.909691  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:11:39.909714  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:39.909749  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:39.942836  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:39.942871  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:39.995252  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:39.995291  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:40.025473  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:40.025502  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:40.073327  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:40.073367  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:40.104877  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:40.104913  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1017 20:11:39.645160  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	W1017 20:11:42.145515  369697 pod_ready.go:104] pod "coredns-66bc5c9577-p4n86" is not "Ready", error: <nil>
	I1017 20:11:42.699098  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:11:42.699565  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:11:42.699617  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:42.699666  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:42.728841  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:42.728871  344862 cri.go:89] found id: ""
	I1017 20:11:42.728883  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:11:42.728939  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:42.733520  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:42.733587  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:42.761412  344862 cri.go:89] found id: ""
	I1017 20:11:42.761444  344862 logs.go:282] 0 containers: []
	W1017 20:11:42.761456  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:42.761465  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:42.761524  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:42.789280  344862 cri.go:89] found id: ""
	I1017 20:11:42.789307  344862 logs.go:282] 0 containers: []
	W1017 20:11:42.789318  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:42.789326  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:42.789387  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:42.816894  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:42.816919  344862 cri.go:89] found id: ""
	I1017 20:11:42.816930  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:42.816993  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:42.821191  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:42.821283  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:42.850416  344862 cri.go:89] found id: ""
	I1017 20:11:42.850447  344862 logs.go:282] 0 containers: []
	W1017 20:11:42.850458  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:42.850467  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:42.850522  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:42.878244  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:42.878269  344862 cri.go:89] found id: ""
	I1017 20:11:42.878279  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:11:42.878336  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:42.882412  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:42.882482  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:42.911425  344862 cri.go:89] found id: ""
	I1017 20:11:42.911456  344862 logs.go:282] 0 containers: []
	W1017 20:11:42.911467  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:42.911475  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:42.911537  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:42.939062  344862 cri.go:89] found id: ""
	I1017 20:11:42.939101  344862 logs.go:282] 0 containers: []
	W1017 20:11:42.939110  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:42.939119  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:42.939133  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:11:43.028882  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:43.028923  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:43.047754  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:43.047787  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:43.106880  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:11:43.106908  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:43.106927  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:43.142390  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:43.142434  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:43.198152  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:43.198191  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:43.226916  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:43.226945  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:43.273398  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:43.273437  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:45.805307  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:11:45.805815  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:11:45.805878  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:11:45.805937  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:11:45.836652  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:45.836677  344862 cri.go:89] found id: ""
	I1017 20:11:45.836688  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:11:45.836782  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:45.841263  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:11:45.841358  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:11:45.870422  344862 cri.go:89] found id: ""
	I1017 20:11:45.870454  344862 logs.go:282] 0 containers: []
	W1017 20:11:45.870465  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:11:45.870472  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:11:45.870527  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:11:45.898750  344862 cri.go:89] found id: ""
	I1017 20:11:45.898784  344862 logs.go:282] 0 containers: []
	W1017 20:11:45.898795  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:11:45.898803  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:11:45.898865  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:11:45.927595  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:45.927620  344862 cri.go:89] found id: ""
	I1017 20:11:45.927632  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:11:45.927688  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:45.931547  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:11:45.931626  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:11:45.963013  344862 cri.go:89] found id: ""
	I1017 20:11:45.963046  344862 logs.go:282] 0 containers: []
	W1017 20:11:45.963057  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:11:45.963065  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:11:45.963127  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:11:45.997687  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:45.997715  344862 cri.go:89] found id: ""
	I1017 20:11:45.997727  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:11:45.997832  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:11:46.002413  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:11:46.002495  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:11:46.033720  344862 cri.go:89] found id: ""
	I1017 20:11:46.033763  344862 logs.go:282] 0 containers: []
	W1017 20:11:46.033775  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:11:46.033783  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:11:46.033846  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:11:46.064158  344862 cri.go:89] found id: ""
	I1017 20:11:46.064186  344862 logs.go:282] 0 containers: []
	W1017 20:11:46.064195  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:11:46.064217  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:11:46.064234  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:11:46.085100  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:11:46.085130  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:11:46.155262  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:11:46.155281  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:11:46.155293  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:11:46.191249  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:11:46.191286  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:11:46.254854  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:11:46.254899  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:11:46.284189  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:11:46.284219  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:11:46.330545  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:11:46.330587  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:11:46.363441  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:11:46.363473  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
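(Each gathering pass in this loop reduces to a handful of crictl and journalctl invocations on the node; a condensed sketch of the equivalent manual session, reusing the same flags that appear in the log — the container-ID placeholder is illustrative:

	sudo crictl ps -a --quiet --name=kube-apiserver   # prints the container ID, if any
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>
	sudo journalctl -u crio -n 400                    # CRI-O service log
	sudo journalctl -u kubelet -n 400                 # kubelet service log)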
	
	
	==> CRI-O <==
	Oct 17 20:11:19 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:19.144898953Z" level=info msg="Started container" PID=1716 containerID=f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2/dashboard-metrics-scraper id=06fcca7e-1545-457c-a335-f55965d3152f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d2e176dea86e8b753724b307736b974d469fa9beed85f3e327265309a02e865
	Oct 17 20:11:20 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:20.095870522Z" level=info msg="Removing container: cc9fbaf140d10032ed4b9f836b67c1f2765d3394be508f4c1d2197d68dfc8cbd" id=df84eb8c-c326-4a14-93b9-4ee5cc71d3f6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:11:20 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:20.108079882Z" level=info msg="Removed container cc9fbaf140d10032ed4b9f836b67c1f2765d3394be508f4c1d2197d68dfc8cbd: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2/dashboard-metrics-scraper" id=df84eb8c-c326-4a14-93b9-4ee5cc71d3f6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.123443257Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=001d432b-053e-48da-a708-405123846a98 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.135173098Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2d9fd98d-1135-4871-b18f-392b68b8ebc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.144301061Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6eae7233-d195-4087-a5ab-74f077124190 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.144669143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.191072172Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.191307476Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/245b9d436bc47f56fade302a7de9c473b44508fd36913af398a58626907dac2b/merged/etc/passwd: no such file or directory"
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.191344579Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/245b9d436bc47f56fade302a7de9c473b44508fd36913af398a58626907dac2b/merged/etc/group: no such file or directory"
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.191664937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.28708139Z" level=info msg="Created container 747137e5be4af0d94b6f109788cf1c1b9bafca36a0e7247a8a3f79cd60d8826b: kube-system/storage-provisioner/storage-provisioner" id=6eae7233-d195-4087-a5ab-74f077124190 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.287873223Z" level=info msg="Starting container: 747137e5be4af0d94b6f109788cf1c1b9bafca36a0e7247a8a3f79cd60d8826b" id=5b1984ef-58b4-48f1-9180-81e6517a2715 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:11:30 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:30.290049791Z" level=info msg="Started container" PID=1733 containerID=747137e5be4af0d94b6f109788cf1c1b9bafca36a0e7247a8a3f79cd60d8826b description=kube-system/storage-provisioner/storage-provisioner id=5b1984ef-58b4-48f1-9180-81e6517a2715 name=/runtime.v1.RuntimeService/StartContainer sandboxID=108af169199ad25456fa1076d65d0f31a742c90544a6b69c05024fd1f8684f93
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.01048975Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=93d2a4ec-8167-47e1-8ed9-9cefd7d95ed3 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.011511116Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ea123555-b96c-4919-b944-3838b4dcbefe name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.012505429Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2/dashboard-metrics-scraper" id=9c2949bd-6108-42f3-8196-cacf7670427e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.012776313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.019378598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.019992672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.050913048Z" level=info msg="Created container 3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2/dashboard-metrics-scraper" id=9c2949bd-6108-42f3-8196-cacf7670427e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.051648959Z" level=info msg="Starting container: 3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666" id=1f48cd45-d7ce-4f3a-8326-5903617da10e name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.053642051Z" level=info msg="Started container" PID=1768 containerID=3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2/dashboard-metrics-scraper id=1f48cd45-d7ce-4f3a-8326-5903617da10e name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d2e176dea86e8b753724b307736b974d469fa9beed85f3e327265309a02e865
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.138997804Z" level=info msg="Removing container: f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea" id=093a694e-c337-4dec-8ee7-cc2c34c2affd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:11:35 old-k8s-version-726816 crio[565]: time="2025-10-17T20:11:35.150265618Z" level=info msg="Removed container f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2/dashboard-metrics-scraper" id=093a694e-c337-4dec-8ee7-cc2c34c2affd name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	3a364cb5d70c9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   3d2e176dea86e       dashboard-metrics-scraper-5f989dc9cf-lfwq2       kubernetes-dashboard
	747137e5be4af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   108af169199ad       storage-provisioner                              kube-system
	6fc9076dca48e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   32 seconds ago      Running             kubernetes-dashboard        0                   7cb20e4b89354       kubernetes-dashboard-8694d4445c-dkhv5            kubernetes-dashboard
	ebb776b4595c3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           49 seconds ago      Running             coredns                     0                   364f35a14eab9       coredns-5dd5756b68-xrnvz                         kube-system
	6f92088ac4c2c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   29a2a0d6d57c5       busybox                                          default
	91b37cb25594b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   5cea72cf33eeb       kindnet-9slhm                                    kube-system
	d366f49e228b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   108af169199ad       storage-provisioner                              kube-system
	c68be51b1893d       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           49 seconds ago      Running             kube-proxy                  0                   6914ada52840c       kube-proxy-xp229                                 kube-system
	968b01f15b033       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           53 seconds ago      Running             kube-controller-manager     0                   80b58ce7f52bc       kube-controller-manager-old-k8s-version-726816   kube-system
	7881cbacb992a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           53 seconds ago      Running             etcd                        0                   eecf5e4d363ff       etcd-old-k8s-version-726816                      kube-system
	8d9c2dfa70a1e       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           53 seconds ago      Running             kube-scheduler              0                   100ae214e1dea       kube-scheduler-old-k8s-version-726816            kube-system
	1bc61bd7d0ccf       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           53 seconds ago      Running             kube-apiserver              0                   8a53f2948e6ae       kube-apiserver-old-k8s-version-726816            kube-system
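(The table above is the default output of crictl ps -a on the node, as invoked by the container-status gathering step earlier in this section:

	sudo crictl ps -a)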
	
	
	==> coredns [ebb776b4595c362bf346440793ab3e48e5a12e2379b9bcedfa1606c7e7878296] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38407 - 9919 "HINFO IN 7967464223475309590.1785099526618209064. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.111841044s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
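(The repeated "waiting for Kubernetes API" lines show coredns starting before the apiserver was reachable, then serving with an unsynced cache. Once the apiserver responds, the pod can be checked with a standard label selector — a sketch; k8s-app=kube-dns is the conventional coredns label:

	kubectl -n kube-system get pods -l k8s-app=kube-dns)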
	
	
	==> describe nodes <==
	Name:               old-k8s-version-726816
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-726816
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=old-k8s-version-726816
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_09_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:09:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-726816
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:11:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:11:28 +0000   Fri, 17 Oct 2025 20:09:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:11:28 +0000   Fri, 17 Oct 2025 20:09:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:11:28 +0000   Fri, 17 Oct 2025 20:09:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:11:28 +0000   Fri, 17 Oct 2025 20:10:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-726816
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                239cdd26-1e67-40fc-a3aa-17a6bcadd5b2
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-xrnvz                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-old-k8s-version-726816                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-9slhm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-726816             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-726816    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-xp229                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-726816             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-lfwq2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-dkhv5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x9 over 2m2s)  kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node old-k8s-version-726816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node old-k8s-version-726816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node old-k8s-version-726816 event: Registered Node old-k8s-version-726816 in Controller
	  Normal  NodeReady                90s                  kubelet          Node old-k8s-version-726816 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node old-k8s-version-726816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)    kubelet          Node old-k8s-version-726816 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                  node-controller  Node old-k8s-version-726816 event: Registered Node old-k8s-version-726816 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [7881cbacb992a19527a25f5d1cce67db8caefd2e7da59b056d1c86a577aedfc1] <==
	{"level":"info","ts":"2025-10-17T20:10:55.575226Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:10:55.575237Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:10:55.575309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-10-17T20:10:55.575391Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-10-17T20:10:55.575551Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:10:55.575586Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:10:55.577708Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-17T20:10:55.57817Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T20:10:55.578934Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-17T20:10:55.578979Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-17T20:10:55.578096Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-17T20:10:57.264769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-17T20:10:57.26483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-17T20:10:57.264878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-10-17T20:10:57.264902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-10-17T20:10:57.26491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-10-17T20:10:57.264924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-10-17T20:10:57.264937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-10-17T20:10:57.266195Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-726816 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T20:10:57.266219Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:10:57.266204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:10:57.266505Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T20:10:57.266535Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-17T20:10:57.267461Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T20:10:57.267625Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 20:11:48 up  1:54,  0 user,  load average: 3.35, 3.49, 2.33
	Linux old-k8s-version-726816 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [91b37cb25594bd4a4037da457468ce8ab04be8d76be1ea150b98cac55be126b1] <==
	I1017 20:10:59.657405       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:10:59.657714       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1017 20:10:59.657917       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:10:59.657934       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:10:59.657960       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:10:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:10:59.859048       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:10:59.859164       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:10:59.859183       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:10:59.859572       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:11:00.257131       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:11:00.257178       1 metrics.go:72] Registering metrics
	I1017 20:11:00.335341       1 controller.go:711] "Syncing nftables rules"
	I1017 20:11:09.858914       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:11:09.858994       1 main.go:301] handling current node
	I1017 20:11:19.859429       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:11:19.859467       1 main.go:301] handling current node
	I1017 20:11:29.859303       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:11:29.859352       1 main.go:301] handling current node
	I1017 20:11:39.862622       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:11:39.862656       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1bc61bd7d0ccf139a7202056a01d7760285248ec1015158005831ced4f43e0e7] <==
	I1017 20:10:58.307881       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:10:58.323586       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1017 20:10:58.365851       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1017 20:10:58.365879       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1017 20:10:58.365894       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1017 20:10:58.365858       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1017 20:10:58.366009       1 aggregator.go:166] initial CRD sync complete...
	I1017 20:10:58.366025       1 autoregister_controller.go:141] Starting autoregister controller
	I1017 20:10:58.366032       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:10:58.366041       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:10:58.366082       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1017 20:10:58.366092       1 shared_informer.go:318] Caches are synced for configmaps
	I1017 20:10:58.366139       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1017 20:10:58.376183       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:10:59.159821       1 controller.go:624] quota admission added evaluator for: namespaces
	I1017 20:10:59.197465       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1017 20:10:59.219359       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:10:59.229322       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:10:59.237599       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1017 20:10:59.269196       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:10:59.279691       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.14.236"}
	I1017 20:10:59.311391       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.88.149"}
	I1017 20:11:11.152776       1 controller.go:624] quota admission added evaluator for: endpoints
	I1017 20:11:11.252013       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:11:11.353276       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [968b01f15b0332d8945cfd8c8d6e9d02cb2f9635511ccc519c0bcf9750467356] <==
	I1017 20:11:11.052949       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 20:11:11.356350       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1017 20:11:11.357648       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1017 20:11:11.366229       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-lfwq2"
	I1017 20:11:11.366257       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-dkhv5"
	I1017 20:11:11.366735       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 20:11:11.374525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.121774ms"
	I1017 20:11:11.374811       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="18.994172ms"
	I1017 20:11:11.381927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.336727ms"
	I1017 20:11:11.382242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.508µs"
	I1017 20:11:11.381998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.139566ms"
	I1017 20:11:11.382324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="36.924µs"
	I1017 20:11:11.390857       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.913µs"
	I1017 20:11:11.399353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.176µs"
	I1017 20:11:11.414014       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 20:11:11.414052       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1017 20:11:16.190208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.676575ms"
	I1017 20:11:16.190377       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="113.786µs"
	I1017 20:11:19.104920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.37µs"
	I1017 20:11:20.108767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.903µs"
	I1017 20:11:21.110652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="100.967µs"
	I1017 20:11:30.660127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.222555ms"
	I1017 20:11:30.660374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.592µs"
	I1017 20:11:35.150330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.923µs"
	I1017 20:11:41.688989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.611µs"
	
	
	==> kube-proxy [c68be51b1893d00600739733307a7ad07027891e96caa6eb528ee3a047f5c923] <==
	I1017 20:10:59.428447       1 server_others.go:69] "Using iptables proxy"
	I1017 20:10:59.438309       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1017 20:10:59.458846       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:10:59.461880       1 server_others.go:152] "Using iptables Proxier"
	I1017 20:10:59.461922       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1017 20:10:59.461933       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1017 20:10:59.461974       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1017 20:10:59.462282       1 server.go:846] "Version info" version="v1.28.0"
	I1017 20:10:59.462301       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:10:59.463037       1 config.go:97] "Starting endpoint slice config controller"
	I1017 20:10:59.463095       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1017 20:10:59.463138       1 config.go:188] "Starting service config controller"
	I1017 20:10:59.463148       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1017 20:10:59.463250       1 config.go:315] "Starting node config controller"
	I1017 20:10:59.463274       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1017 20:10:59.564281       1 shared_informer.go:318] Caches are synced for service config
	I1017 20:10:59.564304       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1017 20:10:59.564358       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [8d9c2dfa70a1ee7b1f6e3a8806e27f4e0cc7037f6cac3b6bdd2e92b821979c8e] <==
	I1017 20:10:55.958439       1 serving.go:348] Generated self-signed cert in-memory
	W1017 20:10:58.292250       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 20:10:58.292285       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:10:58.292304       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 20:10:58.292313       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 20:10:58.323431       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1017 20:10:58.323578       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:10:58.325603       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:10:58.325821       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1017 20:10:58.326450       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1017 20:10:58.326533       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1017 20:10:58.426192       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 20:11:11 old-k8s-version-726816 kubelet[720]: I1017 20:11:11.376450     720 topology_manager.go:215] "Topology Admit Handler" podUID="8d572a9b-dd03-4904-83d4-3dfb0680522e" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-dkhv5"
	Oct 17 20:11:11 old-k8s-version-726816 kubelet[720]: I1017 20:11:11.491007     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8d572a9b-dd03-4904-83d4-3dfb0680522e-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-dkhv5\" (UID: \"8d572a9b-dd03-4904-83d4-3dfb0680522e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dkhv5"
	Oct 17 20:11:11 old-k8s-version-726816 kubelet[720]: I1017 20:11:11.491071     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qckdf\" (UniqueName: \"kubernetes.io/projected/67f5e328-8b65-4fa4-a45e-40382fe9fed8-kube-api-access-qckdf\") pod \"dashboard-metrics-scraper-5f989dc9cf-lfwq2\" (UID: \"67f5e328-8b65-4fa4-a45e-40382fe9fed8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2"
	Oct 17 20:11:11 old-k8s-version-726816 kubelet[720]: I1017 20:11:11.491214     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42d55\" (UniqueName: \"kubernetes.io/projected/8d572a9b-dd03-4904-83d4-3dfb0680522e-kube-api-access-42d55\") pod \"kubernetes-dashboard-8694d4445c-dkhv5\" (UID: \"8d572a9b-dd03-4904-83d4-3dfb0680522e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dkhv5"
	Oct 17 20:11:11 old-k8s-version-726816 kubelet[720]: I1017 20:11:11.491258     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/67f5e328-8b65-4fa4-a45e-40382fe9fed8-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-lfwq2\" (UID: \"67f5e328-8b65-4fa4-a45e-40382fe9fed8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2"
	Oct 17 20:11:16 old-k8s-version-726816 kubelet[720]: I1017 20:11:16.123129     720 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dkhv5" podStartSLOduration=1.121916445 podCreationTimestamp="2025-10-17 20:11:11 +0000 UTC" firstStartedPulling="2025-10-17 20:11:11.702326452 +0000 UTC m=+16.789863038" lastFinishedPulling="2025-10-17 20:11:15.703474244 +0000 UTC m=+20.791010836" observedRunningTime="2025-10-17 20:11:16.122663101 +0000 UTC m=+21.210199694" watchObservedRunningTime="2025-10-17 20:11:16.123064243 +0000 UTC m=+21.210600839"
	Oct 17 20:11:19 old-k8s-version-726816 kubelet[720]: I1017 20:11:19.090314     720 scope.go:117] "RemoveContainer" containerID="cc9fbaf140d10032ed4b9f836b67c1f2765d3394be508f4c1d2197d68dfc8cbd"
	Oct 17 20:11:20 old-k8s-version-726816 kubelet[720]: I1017 20:11:20.094453     720 scope.go:117] "RemoveContainer" containerID="cc9fbaf140d10032ed4b9f836b67c1f2765d3394be508f4c1d2197d68dfc8cbd"
	Oct 17 20:11:20 old-k8s-version-726816 kubelet[720]: I1017 20:11:20.094640     720 scope.go:117] "RemoveContainer" containerID="f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea"
	Oct 17 20:11:20 old-k8s-version-726816 kubelet[720]: E1017 20:11:20.095010     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lfwq2_kubernetes-dashboard(67f5e328-8b65-4fa4-a45e-40382fe9fed8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2" podUID="67f5e328-8b65-4fa4-a45e-40382fe9fed8"
	Oct 17 20:11:21 old-k8s-version-726816 kubelet[720]: I1017 20:11:21.098731     720 scope.go:117] "RemoveContainer" containerID="f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea"
	Oct 17 20:11:21 old-k8s-version-726816 kubelet[720]: E1017 20:11:21.099032     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lfwq2_kubernetes-dashboard(67f5e328-8b65-4fa4-a45e-40382fe9fed8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2" podUID="67f5e328-8b65-4fa4-a45e-40382fe9fed8"
	Oct 17 20:11:22 old-k8s-version-726816 kubelet[720]: I1017 20:11:22.100967     720 scope.go:117] "RemoveContainer" containerID="f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea"
	Oct 17 20:11:22 old-k8s-version-726816 kubelet[720]: E1017 20:11:22.101243     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lfwq2_kubernetes-dashboard(67f5e328-8b65-4fa4-a45e-40382fe9fed8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2" podUID="67f5e328-8b65-4fa4-a45e-40382fe9fed8"
	Oct 17 20:11:30 old-k8s-version-726816 kubelet[720]: I1017 20:11:30.122879     720 scope.go:117] "RemoveContainer" containerID="d366f49e228b9559f5390fd4d4d8cbe630c4d711c715aac5c52834352215ef1c"
	Oct 17 20:11:35 old-k8s-version-726816 kubelet[720]: I1017 20:11:35.009685     720 scope.go:117] "RemoveContainer" containerID="f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea"
	Oct 17 20:11:35 old-k8s-version-726816 kubelet[720]: I1017 20:11:35.137601     720 scope.go:117] "RemoveContainer" containerID="f32f8cb722fb3ec646fa0449231cae7dbb386fc837c8cb70aa8a220a41e0d5ea"
	Oct 17 20:11:35 old-k8s-version-726816 kubelet[720]: I1017 20:11:35.137854     720 scope.go:117] "RemoveContainer" containerID="3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666"
	Oct 17 20:11:35 old-k8s-version-726816 kubelet[720]: E1017 20:11:35.138237     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lfwq2_kubernetes-dashboard(67f5e328-8b65-4fa4-a45e-40382fe9fed8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2" podUID="67f5e328-8b65-4fa4-a45e-40382fe9fed8"
	Oct 17 20:11:41 old-k8s-version-726816 kubelet[720]: I1017 20:11:41.678204     720 scope.go:117] "RemoveContainer" containerID="3a364cb5d70c97d549391c50b7edca894746ea805134220e4dafcb695cec6666"
	Oct 17 20:11:41 old-k8s-version-726816 kubelet[720]: E1017 20:11:41.678471     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lfwq2_kubernetes-dashboard(67f5e328-8b65-4fa4-a45e-40382fe9fed8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lfwq2" podUID="67f5e328-8b65-4fa4-a45e-40382fe9fed8"
	Oct 17 20:11:44 old-k8s-version-726816 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:11:44 old-k8s-version-726816 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:11:44 old-k8s-version-726816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 20:11:44 old-k8s-version-726816 systemd[1]: kubelet.service: Consumed 1.551s CPU time.
	
	
	==> kubernetes-dashboard [6fc9076dca48eb3cdde728afd925cc98ddad1f05f397cf21464426ab3aba4eb1] <==
	2025/10/17 20:11:15 Using namespace: kubernetes-dashboard
	2025/10/17 20:11:15 Using in-cluster config to connect to apiserver
	2025/10/17 20:11:15 Using secret token for csrf signing
	2025/10/17 20:11:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:11:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:11:15 Successful initial request to the apiserver, version: v1.28.0
	2025/10/17 20:11:15 Generating JWE encryption key
	2025/10/17 20:11:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:11:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:11:15 Initializing JWE encryption key from synchronized object
	2025/10/17 20:11:16 Creating in-cluster Sidecar client
	2025/10/17 20:11:16 Serving insecurely on HTTP port: 9090
	2025/10/17 20:11:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:11:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:11:15 Starting overwatch
	
	
	==> storage-provisioner [747137e5be4af0d94b6f109788cf1c1b9bafca36a0e7247a8a3f79cd60d8826b] <==
	I1017 20:11:30.303959       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:11:30.315933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:11:30.315985       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1017 20:11:47.732696       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:11:47.732818       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eaf327a4-eed2-4b18-a7d0-89913f7f259a", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-726816_cbfa130d-bdc0-43e5-8089-09b4f1b9a251 became leader
	I1017 20:11:47.732968       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-726816_cbfa130d-bdc0-43e5-8089-09b4f1b9a251!
	I1017 20:11:47.833517       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-726816_cbfa130d-bdc0-43e5-8089-09b4f1b9a251!
	
	
	==> storage-provisioner [d366f49e228b9559f5390fd4d4d8cbe630c4d711c715aac5c52834352215ef1c] <==
	I1017 20:10:59.394166       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:11:29.396774       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
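Editor's note: the kubelet entries in the dump above show dashboard-metrics-scraper stuck in CrashLoopBackOff, with the restart delay climbing from 10s to 20s. That progression matches the kubelet's standard crash-loop schedule, which doubles the back-off after every failed restart and caps it at five minutes. A minimal sketch of that schedule (illustrative only, not kubelet source):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Kubelet-style crash-loop back-off: start at 10s, double after
		// each failed restart, never exceeding the 5-minute cap.
		const maxBackoff = 5 * time.Minute
		backoff := 10 * time.Second
		for restart := 1; restart <= 6; restart++ {
			fmt.Printf("restart %d: back-off %v\n", restart, backoff)
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}

The first two iterations print the 10s and 20s waits recorded by pod_workers.go above.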
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-726816 -n old-k8s-version-726816
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-726816 -n old-k8s-version-726816: exit status 2 (361.751778ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-726816 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.73s)
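Editor's note: this Pause failure and the no-preload one below fail on the same code path. The pause flow first enumerates running containers in a fixed set of namespaces via crictl label filters (see the cri.go lines in the next trace), and only the runc check that follows fails. A rough reconstruction of that enumeration, with hypothetical helper names; the crictl flags are the ones that appear literally in the trace below:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containersInNamespace is an assumption for this sketch: it shells out
	// with the same flags the cri.go log lines show and returns the IDs.
	func containersInNamespace(ns string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace="+ns).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps (ns=%s): %w", ns, err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, ns := range []string{"kube-system", "kubernetes-dashboard", "istio-operator"} {
			ids, err := containersInNamespace(ns)
			if err != nil {
				fmt.Println(err)
				continue
			}
			for _, id := range ids {
				fmt.Println("found id:", id)
			}
		}
	}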

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-449580 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-449580 --alsologtostderr -v=1: exit status 80 (2.074247526s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-449580 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:12:09.849972  380658 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:12:09.850472  380658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:12:09.850484  380658 out.go:374] Setting ErrFile to fd 2...
	I1017 20:12:09.850491  380658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:12:09.850885  380658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:12:09.851272  380658 out.go:368] Setting JSON to false
	I1017 20:12:09.851308  380658 mustload.go:65] Loading cluster: no-preload-449580
	I1017 20:12:09.851836  380658 config.go:182] Loaded profile config "no-preload-449580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:09.852473  380658 cli_runner.go:164] Run: docker container inspect no-preload-449580 --format={{.State.Status}}
	I1017 20:12:09.874412  380658 host.go:66] Checking if "no-preload-449580" exists ...
	I1017 20:12:09.874761  380658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:12:09.964651  380658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-17 20:12:09.952642116 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:12:09.966508  380658 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-449580 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:12:09.968721  380658 out.go:179] * Pausing node no-preload-449580 ... 
	I1017 20:12:09.970339  380658 host.go:66] Checking if "no-preload-449580" exists ...
	I1017 20:12:09.970589  380658 ssh_runner.go:195] Run: systemctl --version
	I1017 20:12:09.970627  380658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-449580
	I1017 20:12:09.992326  380658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/no-preload-449580/id_rsa Username:docker}
	I1017 20:12:10.093628  380658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:12:10.115449  380658 pause.go:52] kubelet running: true
	I1017 20:12:10.115518  380658 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:12:10.314348  380658 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:12:10.314481  380658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:12:10.402586  380658 cri.go:89] found id: "657fe0dbb4b0cba7157b7d8d6dd281cba239e2b86568e955ef7820a3d73b740f"
	I1017 20:12:10.402621  380658 cri.go:89] found id: "e4cdebb7a5f1e03ca1d6840a7e5d790daca58249854250430492d1c216465dc2"
	I1017 20:12:10.402628  380658 cri.go:89] found id: "fdcad2e90c8dcf59aada3333930294077886b20dc4ffa931ec9d1f20d86de19d"
	I1017 20:12:10.402634  380658 cri.go:89] found id: "b2d438515e445e965a062ab1d3673eae9c240a5640ff6c902c5709be255d0b55"
	I1017 20:12:10.402638  380658 cri.go:89] found id: "2065ed557a2ff9e4311486d101858ee5b30b748b19f878da0d5158806d03a998"
	I1017 20:12:10.402645  380658 cri.go:89] found id: "344d142d37fe5e0cf83f172832d2f0380baafcfe5af95563d75af080c8f38c3c"
	I1017 20:12:10.402650  380658 cri.go:89] found id: "6cf770e38746c4716bb308f95e151bdd97000b0a2142f8c26a0763b88060594f"
	I1017 20:12:10.402654  380658 cri.go:89] found id: "09d3164355d524c8b81db0b45da6184b8608f2453c76034f04243ff5a2366382"
	I1017 20:12:10.402658  380658 cri.go:89] found id: "da4d6ced5b128794ebcf1eb3fba8085c8b428be8cc20e7b0cbbeb23351ceb4d4"
	I1017 20:12:10.402669  380658 cri.go:89] found id: "caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4"
	I1017 20:12:10.402674  380658 cri.go:89] found id: "1995d053f3c779ae7a5d37d3f2392fc388fb7eaf8a318c4c16bc4e63cc6cd09b"
	I1017 20:12:10.402678  380658 cri.go:89] found id: ""
	I1017 20:12:10.402732  380658 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:12:10.418376  380658 retry.go:31] will retry after 244.29257ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:12:10Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:12:10.662885  380658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:12:10.680417  380658 pause.go:52] kubelet running: false
	I1017 20:12:10.680484  380658 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:12:10.887849  380658 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:12:10.887951  380658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:12:10.982360  380658 cri.go:89] found id: "657fe0dbb4b0cba7157b7d8d6dd281cba239e2b86568e955ef7820a3d73b740f"
	I1017 20:12:10.982404  380658 cri.go:89] found id: "e4cdebb7a5f1e03ca1d6840a7e5d790daca58249854250430492d1c216465dc2"
	I1017 20:12:10.982412  380658 cri.go:89] found id: "fdcad2e90c8dcf59aada3333930294077886b20dc4ffa931ec9d1f20d86de19d"
	I1017 20:12:10.982419  380658 cri.go:89] found id: "b2d438515e445e965a062ab1d3673eae9c240a5640ff6c902c5709be255d0b55"
	I1017 20:12:10.982426  380658 cri.go:89] found id: "2065ed557a2ff9e4311486d101858ee5b30b748b19f878da0d5158806d03a998"
	I1017 20:12:10.982432  380658 cri.go:89] found id: "344d142d37fe5e0cf83f172832d2f0380baafcfe5af95563d75af080c8f38c3c"
	I1017 20:12:10.982438  380658 cri.go:89] found id: "6cf770e38746c4716bb308f95e151bdd97000b0a2142f8c26a0763b88060594f"
	I1017 20:12:10.982444  380658 cri.go:89] found id: "09d3164355d524c8b81db0b45da6184b8608f2453c76034f04243ff5a2366382"
	I1017 20:12:10.982449  380658 cri.go:89] found id: "da4d6ced5b128794ebcf1eb3fba8085c8b428be8cc20e7b0cbbeb23351ceb4d4"
	I1017 20:12:10.982459  380658 cri.go:89] found id: "caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4"
	I1017 20:12:10.982468  380658 cri.go:89] found id: "1995d053f3c779ae7a5d37d3f2392fc388fb7eaf8a318c4c16bc4e63cc6cd09b"
	I1017 20:12:10.982473  380658 cri.go:89] found id: ""
	I1017 20:12:10.982530  380658 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:12:10.996572  380658 retry.go:31] will retry after 529.44219ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:12:10Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:12:11.526924  380658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:12:11.542758  380658 pause.go:52] kubelet running: false
	I1017 20:12:11.542834  380658 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:12:11.742576  380658 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:12:11.742691  380658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:12:11.839238  380658 cri.go:89] found id: "657fe0dbb4b0cba7157b7d8d6dd281cba239e2b86568e955ef7820a3d73b740f"
	I1017 20:12:11.839260  380658 cri.go:89] found id: "e4cdebb7a5f1e03ca1d6840a7e5d790daca58249854250430492d1c216465dc2"
	I1017 20:12:11.839263  380658 cri.go:89] found id: "fdcad2e90c8dcf59aada3333930294077886b20dc4ffa931ec9d1f20d86de19d"
	I1017 20:12:11.839266  380658 cri.go:89] found id: "b2d438515e445e965a062ab1d3673eae9c240a5640ff6c902c5709be255d0b55"
	I1017 20:12:11.839269  380658 cri.go:89] found id: "2065ed557a2ff9e4311486d101858ee5b30b748b19f878da0d5158806d03a998"
	I1017 20:12:11.839272  380658 cri.go:89] found id: "344d142d37fe5e0cf83f172832d2f0380baafcfe5af95563d75af080c8f38c3c"
	I1017 20:12:11.839274  380658 cri.go:89] found id: "6cf770e38746c4716bb308f95e151bdd97000b0a2142f8c26a0763b88060594f"
	I1017 20:12:11.839277  380658 cri.go:89] found id: "09d3164355d524c8b81db0b45da6184b8608f2453c76034f04243ff5a2366382"
	I1017 20:12:11.839279  380658 cri.go:89] found id: "da4d6ced5b128794ebcf1eb3fba8085c8b428be8cc20e7b0cbbeb23351ceb4d4"
	I1017 20:12:11.839290  380658 cri.go:89] found id: "caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4"
	I1017 20:12:11.839292  380658 cri.go:89] found id: "1995d053f3c779ae7a5d37d3f2392fc388fb7eaf8a318c4c16bc4e63cc6cd09b"
	I1017 20:12:11.839295  380658 cri.go:89] found id: ""
	I1017 20:12:11.839345  380658 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:12:11.855812  380658 out.go:203] 
	W1017 20:12:11.857017  380658 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:12:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:12:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:12:11.857043  380658 out.go:285] * 
	* 
	W1017 20:12:11.861820  380658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:12:11.863357  380658 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-449580 --alsologtostderr -v=1 failed: exit status 80
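Editor's note: the fatal step is visible in the stderr above. Every attempt at `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory` (suggesting this crio node keeps its runtime state somewhere other than /run/runc), and after a few jittered retries the pause aborts with GUEST_PAUSE. A sketch of that retry shape, using hypothetical names rather than minikube's actual source:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// runcList mirrors the failing call from the trace: it runs
	// "sudo runc list -f json" and surfaces the combined output on error.
	func runcList() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list -f json: %v\noutput: %s", err, out)
		}
		return nil
	}

	func main() {
		wait := 200 * time.Millisecond
		const attempts = 3
		for i := 1; i <= attempts; i++ {
			err := runcList()
			if err == nil {
				return
			}
			if i == attempts {
				fmt.Println("X Exiting due to GUEST_PAUSE:", err)
				return
			}
			// Jittered, roughly doubling waits, like the 244ms and 529ms
			// delays retry.go logged above.
			d := wait + time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
			wait *= 2
		}
	}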
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-449580
helpers_test.go:243: (dbg) docker inspect no-preload-449580:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb",
	        "Created": "2025-10-17T20:09:52.380878563Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 369903,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:11:09.555461475Z",
	            "FinishedAt": "2025-10-17T20:11:08.726874589Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb/hosts",
	        "LogPath": "/var/lib/docker/containers/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb-json.log",
	        "Name": "/no-preload-449580",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-449580:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-449580",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb",
	                "LowerDir": "/var/lib/docker/overlay2/c7ad98093ee207252ec827bedcd754cea7ba300950ae4070abdafab8792e4b46-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7ad98093ee207252ec827bedcd754cea7ba300950ae4070abdafab8792e4b46/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7ad98093ee207252ec827bedcd754cea7ba300950ae4070abdafab8792e4b46/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7ad98093ee207252ec827bedcd754cea7ba300950ae4070abdafab8792e4b46/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-449580",
	                "Source": "/var/lib/docker/volumes/no-preload-449580/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-449580",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-449580",
	                "name.minikube.sigs.k8s.io": "no-preload-449580",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "421ae79c3faa207b2636cb7cbd1afde746b1c221b0a298f154415a66dec8fc3d",
	            "SandboxKey": "/var/run/docker/netns/421ae79c3faa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-449580": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:38:8d:43:88:9d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b82ebd045e12b91841d651f11549608344307c54224bf0d85f675490a33cca93",
	                    "EndpointID": "7ffbb798f3421d91e64321b56d0ca6d197c9fbedd8cfa5316ca3e704d6a91a12",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-449580",
	                        "11713a3ef64d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449580 -n no-preload-449580
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449580 -n no-preload-449580: exit status 2 (362.924732ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
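Editor's note: `--format={{.Host}}` is a Go text/template rendered over minikube's status struct, which is why stdout is just `Running` while the process still exits 2; as the helper notes, the non-zero status "may be ok" here, since components other than the host can legitimately be stopped or paused. A stand-in illustration of that template evaluation (the Status type below is hypothetical, not minikube's):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the struct minikube renders status
	// templates against; only the field names matter for this sketch.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		tmpl.Execute(os.Stdout, st) // prints: Running
	}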
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-449580 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-449580 logs -n 25: (1.333204492s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-660693 │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ start   │ -p running-upgrade-097245 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-097245    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p missing-upgrade-159057                                                                                                                                                                                                                     │ missing-upgrade-159057    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p force-systemd-flag-599050 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p running-upgrade-097245                                                                                                                                                                                                                     │ running-upgrade-097245    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ ssh     │ force-systemd-flag-599050 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p force-systemd-flag-599050                                                                                                                                                                                                                  │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-726816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p old-k8s-version-726816 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-726816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-449580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p no-preload-449580 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:11 UTC │
	│ addons  │ enable dashboard -p no-preload-449580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ image   │ old-k8s-version-726816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ pause   │ -p old-k8s-version-726816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-051488        │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	│ start   │ -p cert-expiration-202048 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-202048    │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ image   │ no-preload-449580 image list --format=json                                                                                                                                                                                                    │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ pause   │ -p no-preload-449580 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:12:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:12:05.364535  379394 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:12:05.364806  379394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:12:05.364810  379394 out.go:374] Setting ErrFile to fd 2...
	I1017 20:12:05.364816  379394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:12:05.365107  379394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:12:05.365679  379394 out.go:368] Setting JSON to false
	I1017 20:12:05.367244  379394 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6873,"bootTime":1760725052,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:12:05.367425  379394 start.go:141] virtualization: kvm guest
	I1017 20:12:05.369905  379394 out.go:179] * [cert-expiration-202048] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:12:05.371442  379394 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:12:05.371426  379394 notify.go:220] Checking for updates...
	I1017 20:12:05.373013  379394 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:12:05.374467  379394 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:12:05.375976  379394 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:12:05.377314  379394 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:12:05.378648  379394 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:12:05.380453  379394 config.go:182] Loaded profile config "cert-expiration-202048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:05.381232  379394 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:12:05.414382  379394 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:12:05.414539  379394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:12:05.490109  379394 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 20:12:05.478083658 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:12:05.490205  379394 docker.go:318] overlay module found
	I1017 20:12:05.494008  379394 out.go:179] * Using the docker driver based on existing profile
	I1017 20:12:05.495924  379394 start.go:305] selected driver: docker
	I1017 20:12:05.495938  379394 start.go:925] validating driver "docker" against &{Name:cert-expiration-202048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-202048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:05.496089  379394 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:12:05.496918  379394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:12:05.565931  379394 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 20:12:05.553819033 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:12:05.566268  379394 cni.go:84] Creating CNI manager for ""
	I1017 20:12:05.566346  379394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:05.566393  379394 start.go:349] cluster config:
	{Name:cert-expiration-202048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-202048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:05.569263  379394 out.go:179] * Starting "cert-expiration-202048" primary control-plane node in "cert-expiration-202048" cluster
	I1017 20:12:05.570667  379394 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:12:05.572167  379394 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:12:05.573537  379394 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:12:05.573579  379394 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 20:12:05.573595  379394 cache.go:58] Caching tarball of preloaded images
	I1017 20:12:05.573648  379394 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:12:05.573713  379394 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:12:05.573723  379394 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:12:05.573859  379394 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/config.json ...
	I1017 20:12:05.595769  379394 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:12:05.595785  379394 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:12:05.595804  379394 cache.go:232] Successfully downloaded all kic artifacts
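
	For reference, the "exists in daemon, skipping pull" decision just above reduces to an image-inspect probe against the local Docker daemon. A minimal Go sketch of that check (file name and messages are illustrative; this is not minikube's image.go code):

	// imagecheck.go — sketch of the "found in local docker daemon, skipping pull" probe.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"
		// `docker image inspect` exits 0 only when the image is already present locally.
		if err := exec.Command("docker", "image", "inspect", ref).Run(); err != nil {
			fmt.Println("not in local daemon; would pull", ref)
			return
		}
		fmt.Println("found in local daemon; skipping pull")
	}
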
	I1017 20:12:05.595833  379394 start.go:360] acquireMachinesLock for cert-expiration-202048: {Name:mkeb350189e5dcd93a71dc9a551cd333325075c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:12:05.595899  379394 start.go:364] duration metric: took 46.623µs to acquireMachinesLock for "cert-expiration-202048"
	I1017 20:12:05.595917  379394 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:12:05.595922  379394 fix.go:54] fixHost starting: 
	I1017 20:12:05.596143  379394 cli_runner.go:164] Run: docker container inspect cert-expiration-202048 --format={{.State.Status}}
	I1017 20:12:05.614896  379394 fix.go:112] recreateIfNeeded on cert-expiration-202048: state=Running err=<nil>
	W1017 20:12:05.614929  379394 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:12:04.899994  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:12:04.900450  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:12:04.900511  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:12:04.900571  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:12:04.929112  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:12:04.929143  344862 cri.go:89] found id: ""
	I1017 20:12:04.929155  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:12:04.929219  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:04.933579  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:12:04.933650  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:12:04.963457  344862 cri.go:89] found id: ""
	I1017 20:12:04.963492  344862 logs.go:282] 0 containers: []
	W1017 20:12:04.963505  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:12:04.963512  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:12:04.963576  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:12:04.994022  344862 cri.go:89] found id: ""
	I1017 20:12:04.994050  344862 logs.go:282] 0 containers: []
	W1017 20:12:04.994062  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:12:04.994075  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:12:04.994147  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:12:05.023819  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:05.023846  344862 cri.go:89] found id: ""
	I1017 20:12:05.023857  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:12:05.023926  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:05.028219  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:12:05.028287  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:12:05.059680  344862 cri.go:89] found id: ""
	I1017 20:12:05.059711  344862 logs.go:282] 0 containers: []
	W1017 20:12:05.059722  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:12:05.059730  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:12:05.059811  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:12:05.089000  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:05.089023  344862 cri.go:89] found id: ""
	I1017 20:12:05.089031  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:12:05.089092  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:05.093375  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:12:05.093452  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:12:05.122730  344862 cri.go:89] found id: ""
	I1017 20:12:05.122774  344862 logs.go:282] 0 containers: []
	W1017 20:12:05.122786  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:12:05.122795  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:12:05.122858  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:12:05.152104  344862 cri.go:89] found id: ""
	I1017 20:12:05.152138  344862 logs.go:282] 0 containers: []
	W1017 20:12:05.152152  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:12:05.152163  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:05.152178  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:12:05.199349  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:05.199388  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:05.234068  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:12:05.234101  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:12:05.322838  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:05.322874  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:05.344060  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:12:05.344091  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:12:05.412759  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:12:05.412782  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:12:05.412800  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:12:05.462272  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:05.462323  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:05.532655  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:12:05.532717  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
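
	The log-gathering pass above follows one pattern per component: list matching container IDs with `crictl ps -a --quiet --name=<component>`, then tail each container's logs. A minimal Go sketch of that sweep, assuming it runs on the node with crictl on PATH (not minikube's logs.go implementation):

	// logsweep.go — sketch of the per-component container log sweep shown above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
			// List all container IDs (any state) whose name matches the component.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				continue
			}
			for _, id := range strings.Fields(string(out)) {
				// Tail the last 400 lines, as the runs above do.
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("==> %s [%s] <==\n%s", name, id, logs)
			}
		}
	}
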
	I1017 20:12:03.008445  376518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.crt.8a4f5dce ...
	I1017 20:12:03.008475  376518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.crt.8a4f5dce: {Name:mk81a89ba9e4fdfb95ee5422fb1576cd0840c0d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:03.008674  376518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.key.8a4f5dce ...
	I1017 20:12:03.008691  376518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.key.8a4f5dce: {Name:mk357bd95fee2a329f370077c6a642cb4659a2ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:03.008828  376518 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.crt.8a4f5dce -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.crt
	I1017 20:12:03.008940  376518 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.key.8a4f5dce -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.key
	I1017 20:12:03.009032  376518 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.key
	I1017 20:12:03.009053  376518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.crt with IP's: []
	I1017 20:12:03.340983  376518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.crt ...
	I1017 20:12:03.341011  376518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.crt: {Name:mk5676468906393c987db64c1bb5ac4d5655daed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:03.341183  376518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.key ...
	I1017 20:12:03.341196  376518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.key: {Name:mkf55535b7cb3665bf3c84db43c37e9a25a285ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
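
	The crypto.go lines above generate a profile cert pair signed by the cluster CA and write it under lock. A self-contained Go sketch of the same idea using only the standard library (the CA here is generated in-memory as a stand-in for the persisted minikubeCA; this is not minikube's crypto.go code):

	// certsketch.go — sketch of CA-signed client-cert generation.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Stand-in for minikubeCA.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "sketchCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Client cert signed by the CA — the shape of proxy-client.crt above.
		cliKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		cliTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "proxy-client"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}
		cliDER, err := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)
		check(err)
		// minikube writes these to apiserver/proxy-client .crt/.key files under lock.
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: cliDER}))
	}
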
	I1017 20:12:03.341386  376518 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:12:03.341424  376518 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:12:03.341431  376518 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:12:03.341456  376518 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:12:03.341478  376518 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:12:03.341499  376518 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:12:03.341539  376518 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:03.342170  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:12:03.362349  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:12:03.382466  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:12:03.401412  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:12:03.420856  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1017 20:12:03.441327  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:12:03.461491  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:12:03.481584  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:12:03.501450  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:12:03.523319  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:12:03.543170  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:12:03.562794  376518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:12:03.578364  376518 ssh_runner.go:195] Run: openssl version
	I1017 20:12:03.584970  376518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:12:03.594702  376518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:12:03.599190  376518 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:12:03.599256  376518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:12:03.634612  376518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:12:03.643985  376518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:12:03.653093  376518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:03.657446  376518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:03.657511  376518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:03.693092  376518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:12:03.702729  376518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:12:03.712510  376518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:12:03.717555  376518 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:12:03.717622  376518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:12:03.754542  376518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
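
	A note on the symlink dance above: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and `/etc/ssl/certs/<hash>.0` is the name OpenSSL's lookup machinery expects, so each cert gets a hash-named symlink if one is not already present. A Go sketch of one iteration of that step (requires root for /etc/ssl/certs; assumes openssl on PATH):

	// hashlink.go — sketch of the cert-hash symlink step shown above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/139217.pem" // path from the log above
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "51391683"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Equivalent to: test -L <link> || ln -fs <pem> <link>
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			if err := os.Symlink(pem, link); err != nil {
				panic(err)
			}
		}
	}
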
	I1017 20:12:03.763952  376518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:12:03.767937  376518 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:12:03.767998  376518 kubeadm.go:400] StartCluster: {Name:embed-certs-051488 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-051488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:03.768078  376518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:12:03.768132  376518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:12:03.797173  376518 cri.go:89] found id: ""
	I1017 20:12:03.797243  376518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:12:03.806231  376518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:12:03.814593  376518 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:12:03.814651  376518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:12:03.823363  376518 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:12:03.823392  376518 kubeadm.go:157] found existing configuration files:
	
	I1017 20:12:03.823447  376518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:12:03.831826  376518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:12:03.831888  376518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:12:03.840231  376518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:12:03.849118  376518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:12:03.849176  376518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:12:03.857308  376518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:12:03.865537  376518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:12:03.865594  376518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:12:03.873870  376518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:12:03.883348  376518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:12:03.883415  376518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
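
	The cleanup loop above applies the same rule to each kubeadm conf file: keep it only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it so kubeadm regenerates it. A Go sketch of that loop, assuming it runs on the node (not minikube's kubeadm.go code):

	// staleconf.go — sketch of the stale kubeadm config cleanup shown above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, c := range confs {
			path := "/etc/kubernetes/" + c
			// grep exits non-zero when the endpoint is absent (or the file is missing).
			err := exec.Command("grep", "-q", "https://control-plane.minikube.internal:8443", path).Run()
			if err != nil {
				fmt.Printf("removing stale %s\n", path)
				os.Remove(path) // mirrors: sudo rm -f <path>
			}
		}
	}
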
	I1017 20:12:03.892654  376518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:12:03.959630  376518 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 20:12:04.024087  376518 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 20:12:05.617002  379394 out.go:252] * Updating the running docker "cert-expiration-202048" container ...
	I1017 20:12:05.617039  379394 machine.go:93] provisionDockerMachine start ...
	I1017 20:12:05.617125  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:05.636143  379394 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:05.636360  379394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1017 20:12:05.636366  379394 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:12:05.774043  379394 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-202048
	
	I1017 20:12:05.774074  379394 ubuntu.go:182] provisioning hostname "cert-expiration-202048"
	I1017 20:12:05.774138  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:05.793892  379394 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:05.794125  379394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1017 20:12:05.794134  379394 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-202048 && echo "cert-expiration-202048" | sudo tee /etc/hostname
	I1017 20:12:05.943981  379394 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-202048
	
	I1017 20:12:05.944073  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:05.964393  379394 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:05.964701  379394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1017 20:12:05.964722  379394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-202048' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-202048/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-202048' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:12:06.104266  379394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
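
	The hostname script above is idempotent: if /etc/hosts already maps the hostname it does nothing; otherwise it rewrites an existing 127.0.1.1 entry in place, or appends one. A Go sketch of that /etc/hosts edit (requires root; helper logic is illustrative, not minikube's ubuntu.go code):

	// hostsentry.go — sketch of the /etc/hosts rewrite performed by the SSH script above.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		name := "cert-expiration-202048"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Already mapped? Then there is nothing to do.
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
			return
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.Match(data) {
			// Rewrite the existing 127.0.1.1 line, as the sed branch does.
			data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+name))
		} else {
			// Otherwise append a fresh entry, as the tee -a branch does.
			data = append(data, []byte(fmt.Sprintf("127.0.1.1 %s\n", name))...)
		}
		if err := os.WriteFile("/etc/hosts", data, 0644); err != nil {
			panic(err)
		}
	}
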
	I1017 20:12:06.104290  379394 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:12:06.104310  379394 ubuntu.go:190] setting up certificates
	I1017 20:12:06.104320  379394 provision.go:84] configureAuth start
	I1017 20:12:06.104374  379394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-202048
	I1017 20:12:06.126268  379394 provision.go:143] copyHostCerts
	I1017 20:12:06.126332  379394 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:12:06.126345  379394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:12:06.126411  379394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:12:06.126515  379394 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:12:06.126519  379394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:12:06.126544  379394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:12:06.126614  379394 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:12:06.126617  379394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:12:06.126638  379394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:12:06.126697  379394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-202048 san=[127.0.0.1 192.168.85.2 cert-expiration-202048 localhost minikube]
	I1017 20:12:06.234140  379394 provision.go:177] copyRemoteCerts
	I1017 20:12:06.234200  379394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:12:06.234233  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:06.253484  379394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/cert-expiration-202048/id_rsa Username:docker}
	I1017 20:12:06.354265  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:12:06.375482  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1017 20:12:06.397465  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:12:06.418758  379394 provision.go:87] duration metric: took 314.409814ms to configureAuth
	I1017 20:12:06.418782  379394 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:12:06.418967  379394 config.go:182] Loaded profile config "cert-expiration-202048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:06.419052  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:06.440072  379394 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:06.440305  379394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1017 20:12:06.440331  379394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:12:06.763041  379394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:12:06.763059  379394 machine.go:96] duration metric: took 1.146012668s to provisionDockerMachine
	I1017 20:12:06.763072  379394 start.go:293] postStartSetup for "cert-expiration-202048" (driver="docker")
	I1017 20:12:06.763085  379394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:12:06.763163  379394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:12:06.763207  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:06.782548  379394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/cert-expiration-202048/id_rsa Username:docker}
	I1017 20:12:06.882199  379394 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:12:06.886354  379394 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:12:06.886379  379394 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:12:06.886392  379394 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:12:06.886458  379394 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:12:06.886546  379394 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:12:06.886663  379394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:12:06.896350  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:06.917281  379394 start.go:296] duration metric: took 154.191627ms for postStartSetup
	I1017 20:12:06.917379  379394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:12:06.917419  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:06.938672  379394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/cert-expiration-202048/id_rsa Username:docker}
	I1017 20:12:07.034613  379394 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:12:07.040083  379394 fix.go:56] duration metric: took 1.44415046s for fixHost
	I1017 20:12:07.040105  379394 start.go:83] releasing machines lock for "cert-expiration-202048", held for 1.444197486s
	I1017 20:12:07.040202  379394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-202048
	I1017 20:12:07.059140  379394 ssh_runner.go:195] Run: cat /version.json
	I1017 20:12:07.059191  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:07.059211  379394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:12:07.059259  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:07.079960  379394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/cert-expiration-202048/id_rsa Username:docker}
	I1017 20:12:07.080002  379394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/cert-expiration-202048/id_rsa Username:docker}
	I1017 20:12:07.176508  379394 ssh_runner.go:195] Run: systemctl --version
	I1017 20:12:07.242813  379394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:12:07.282734  379394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:12:07.288191  379394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:12:07.288258  379394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:12:07.297579  379394 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:12:07.297600  379394 start.go:495] detecting cgroup driver to use...
	I1017 20:12:07.297635  379394 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:12:07.297686  379394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:12:07.313866  379394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:12:07.327689  379394 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:12:07.327769  379394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:12:07.344276  379394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:12:07.358938  379394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:12:07.484256  379394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:12:07.603067  379394 docker.go:234] disabling docker service ...
	I1017 20:12:07.603119  379394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:12:07.620392  379394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:12:07.635114  379394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:12:07.759185  379394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:12:07.876416  379394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:12:07.890218  379394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:12:07.906594  379394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:12:07.906648  379394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.917970  379394 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:12:07.918062  379394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.928773  379394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.939654  379394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.949962  379394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:12:07.959959  379394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.977414  379394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.987258  379394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.997929  379394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:12:08.007320  379394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:12:08.015855  379394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:08.155227  379394 ssh_runner.go:195] Run: sudo systemctl restart crio
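
The sed edits above pin CRI-O's pause image and switch its cgroup manager to systemd in the 02-crio.conf drop-in, after which crio is restarted to pick up the change. A minimal local sketch of the same two rewrites, assuming the file already contains pause_image and cgroup_manager keys (rewriteCrioConf is an illustrative helper; minikube runs sed over SSH rather than editing the file in-process):

// Sketch: regexp equivalents of the two logged sed substitutions.
package main

import (
	"os"
	"regexp"
)

func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// sed -i 's|^.*pause_image = .*$|pause_image = "<image>"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "<mgr>"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	_ = rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "systemd")
}
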
	I1017 20:12:08.342347  379394 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:12:08.342404  379394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:12:08.346847  379394 start.go:563] Will wait 60s for crictl version
	I1017 20:12:08.346897  379394 ssh_runner.go:195] Run: which crictl
	I1017 20:12:08.351338  379394 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:12:08.380244  379394 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:12:08.380321  379394 ssh_runner.go:195] Run: crio --version
	I1017 20:12:08.413225  379394 ssh_runner.go:195] Run: crio --version
	I1017 20:12:08.447822  379394 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:12:08.449406  379394 cli_runner.go:164] Run: docker network inspect cert-expiration-202048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:12:08.471543  379394 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 20:12:08.476539  379394 kubeadm.go:883] updating cluster {Name:cert-expiration-202048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-202048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:12:08.476671  379394 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:12:08.476727  379394 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:12:08.515853  379394 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:12:08.515867  379394 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:12:08.515923  379394 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:12:08.548446  379394 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:12:08.548462  379394 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:12:08.548471  379394 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1017 20:12:08.548596  379394 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-202048 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-202048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:12:08.548673  379394 ssh_runner.go:195] Run: crio config
	I1017 20:12:08.619390  379394 cni.go:84] Creating CNI manager for ""
	I1017 20:12:08.619404  379394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:08.619420  379394 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:12:08.619467  379394 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-202048 NodeName:cert-expiration-202048 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:12:08.619641  379394 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-202048"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
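
The kubeadm/kubelet/kube-proxy manifest above is rendered from the option struct logged at kubeadm.go:190. A hedged sketch of that render step using Go's text/template; the template fragment is trimmed from the log output, and the KubeletOpts type with its fields is an assumption for illustration, not minikube's actual type:

// Sketch: render a KubeletConfiguration fragment from template options.
package main

import (
	"os"
	"text/template"
)

const kubeletCfg = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRISocket}}
clusterDomain: "{{.DNSDomain}}"
staticPodPath: /etc/kubernetes/manifests
`

type KubeletOpts struct {
	CgroupDriver string
	CRISocket    string
	DNSDomain    string
}

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletCfg))
	opts := KubeletOpts{
		CgroupDriver: "systemd",
		CRISocket:    "unix:///var/run/crio/crio.sock",
		DNSDomain:    "cluster.local",
	}
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
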
	
	I1017 20:12:08.619710  379394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:12:08.632175  379394 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:12:08.632237  379394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:12:08.642841  379394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1017 20:12:08.659652  379394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:12:08.676528  379394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1017 20:12:08.691677  379394 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:12:08.695846  379394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:08.825164  379394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:12:08.838907  379394 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048 for IP: 192.168.85.2
	I1017 20:12:08.838920  379394 certs.go:195] generating shared ca certs ...
	I1017 20:12:08.838934  379394 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:08.839105  379394 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:12:08.839149  379394 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:12:08.839155  379394 certs.go:257] generating profile certs ...
	W1017 20:12:08.839277  379394 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1017 20:12:08.839299  379394 certs.go:624] cert expired /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.crt: expiration: 2025-10-17 20:11:50 +0000 UTC, now: 2025-10-17 20:12:08.839293005 +0000 UTC m=+3.521950829
	I1017 20:12:08.839424  379394 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.key
	I1017 20:12:08.839447  379394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.crt with IP's: []
	I1017 20:12:09.922147  379394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.crt ...
	I1017 20:12:09.922172  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.crt: {Name:mk9f34fdcdf0239482f54154fd9e382dad0e337b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:09.922367  379394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.key ...
	I1017 20:12:09.922384  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.key: {Name:mkc8709422661d1292ff8c373c083118adc912e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1017 20:12:09.922620  379394 out.go:285] ! Certificate apiserver.crt.05eb0b2e has expired. Generating a new one...
	I1017 20:12:09.922642  379394 certs.go:624] cert expired /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt.05eb0b2e: expiration: 2025-10-17 20:11:50 +0000 UTC, now: 2025-10-17 20:12:09.922635172 +0000 UTC m=+4.605292995
	I1017 20:12:09.922763  379394 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.key.05eb0b2e
	I1017 20:12:09.922782  379394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt.05eb0b2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1017 20:12:10.705875  379394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt.05eb0b2e ...
	I1017 20:12:10.705900  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt.05eb0b2e: {Name:mk2123b7facb3f90f5910d798f6cfbb4edbf0768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:10.706067  379394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.key.05eb0b2e ...
	I1017 20:12:10.706079  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.key.05eb0b2e: {Name:mk6326cde57a7fbe0db60280bf5e7459790a7539 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:10.706166  379394 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt.05eb0b2e -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt
	I1017 20:12:10.706349  379394 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.key.05eb0b2e -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.key
	W1017 20:12:10.706558  379394 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1017 20:12:10.706580  379394 certs.go:624] cert expired /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.crt: expiration: 2025-10-17 20:11:50 +0000 UTC, now: 2025-10-17 20:12:10.706572146 +0000 UTC m=+5.389229972
	I1017 20:12:10.706668  379394 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.key
	I1017 20:12:10.706693  379394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.crt with IP's: []
	I1017 20:12:10.799084  379394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.crt ...
	I1017 20:12:10.799103  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.crt: {Name:mka0331a625f20ecc1c1fe9c075609218787e89d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:10.799243  379394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.key ...
	I1017 20:12:10.799251  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.key: {Name:mk6b8b6c1d33a1cc431ca3873618ea9a7b9fd956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
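
Each expired profile certificate above is replaced by minting a fresh one signed by the still-valid minikubeCA. A compact sketch of that signing step with crypto/x509, under stated assumptions: the file paths and subject are placeholders, the CA key is assumed to be PEM-encoded PKCS#1 RSA, the one-year lifetime mirrors CertExpiration:8760h0m0s from the cluster config, and error handling is trimmed for brevity:

// Sketch: generate a CA-signed client certificate, as the regeneration
// step above does. Assumes valid ca.crt/ca.key exist in the working dir.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	caCertPEM, _ := os.ReadFile("ca.crt")
	caKeyPEM, _ := os.ReadFile("ca.key")
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// Fresh key pair and a client-auth template valid for one year.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(8760 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	_ = os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("client.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}
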
	I1017 20:12:10.799417  379394 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:12:10.799448  379394 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:12:10.799454  379394 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:12:10.799476  379394 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:12:10.799498  379394 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:12:10.799516  379394 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:12:10.799551  379394 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:10.800170  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:12:10.827565  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:12:10.850951  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:12:10.874513  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:12:10.895447  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1017 20:12:10.925615  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:12:10.949443  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:12:10.972544  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:12:10.997917  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:12:11.020290  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:12:11.040629  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:12:11.069907  379394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:12:11.085415  379394 ssh_runner.go:195] Run: openssl version
	I1017 20:12:11.092421  379394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:12:11.102163  379394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:12:11.106317  379394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:12:11.106379  379394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:12:11.149834  379394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:12:11.159508  379394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:12:11.169068  379394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:11.173294  379394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:11.173356  379394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:11.230143  379394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:12:11.240806  379394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:12:11.252293  379394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:12:11.256757  379394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:12:11.256811  379394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:12:11.308008  379394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
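
The three openssl/ln pairs above install each CA into /etc/ssl/certs under its OpenSSL subject hash, which is how the system trust store locates certificates. A sketch that shells out to openssl exactly as the logged commands do (installCACert is an illustrative name; creating the symlink requires root, and openssl must be on PATH):

// Sketch: compute the subject hash of a CA PEM and symlink it as
// /etc/ssl/certs/<hash>.0, mirroring the logged test -L || ln -fs step.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCACert(pemPath string) error {
	// Same as: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
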
	I1017 20:12:11.320637  379394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:12:11.327649  379394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:12:11.369961  379394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:12:11.424851  379394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:12:11.482154  379394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:12:11.526249  379394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:12:11.575512  379394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
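
The six openssl probes above ask whether each control-plane certificate expires within 86400 seconds (24 hours). The same check in pure Go, which avoids shelling out; the path in main is just an example taken from the log:

// Sketch: Go equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
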
	I1017 20:12:11.629906  379394 kubeadm.go:400] StartCluster: {Name:cert-expiration-202048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-202048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:11.629999  379394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:12:11.630090  379394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:12:11.661827  379394 cri.go:89] found id: "9781483db16aa60d939b68115ffc0d63fa8e9f9decddc2d9d577e597adfe0c8d"
	I1017 20:12:11.661854  379394 cri.go:89] found id: "9088d862c033e96c67db030cb3c98053254b4e1c7d3cc60b9da67682246423ce"
	I1017 20:12:11.661858  379394 cri.go:89] found id: "4c9fe1d4de02d3e2a08e5e7ac998200b0cfa688626d41b9372b316c6d80e099f"
	I1017 20:12:11.661862  379394 cri.go:89] found id: "61e19a54d084a7370b2b05fd3684396dd3b89390517bfd29786c6f87a60c4e2a"
	I1017 20:12:11.661866  379394 cri.go:89] found id: "22276078f99298494af62eaf58e20b749dc111d4521573054cc3230e54426ea3"
	I1017 20:12:11.661870  379394 cri.go:89] found id: "47c5210d4f132802b5ce2ac954a76ac57878435d16216ff828796ce34f9a70bf"
	I1017 20:12:11.661873  379394 cri.go:89] found id: "442bdd88922d2c075abb61d1c000af32fadeacf6aa883612a19a60bc701a4ec1"
	I1017 20:12:11.661875  379394 cri.go:89] found id: "d437f4da5b6f02ca2d4e81fc31e407b12d81134c7c6f7a1d22e679e9e96237ca"
	I1017 20:12:11.661878  379394 cri.go:89] found id: ""
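
The IDs above come from running crictl with a namespace label filter and --quiet so that only container IDs are printed, one per line. A sketch of that listing step (assumes crictl is on PATH with sudo rights; listKubeSystemContainers is an illustrative name, not minikube's cri package API):

// Sketch: list kube-system container IDs via crictl, as the logged
// `crictl ps -a --quiet --label ...` invocation does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("listing failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
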
	I1017 20:12:11.661922  379394 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:12:11.675236  379394 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:12:11Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:12:11.675303  379394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:12:11.685164  379394 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:12:11.685178  379394 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:12:11.685242  379394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:12:11.695145  379394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:12:11.695906  379394 kubeconfig.go:125] found "cert-expiration-202048" server: "https://192.168.85.2:8443"
	I1017 20:12:11.697958  379394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:12:11.707208  379394 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1017 20:12:11.707240  379394 kubeadm.go:601] duration metric: took 22.056379ms to restartPrimaryControlPlane
	I1017 20:12:11.707249  379394 kubeadm.go:402] duration metric: took 77.353296ms to StartCluster
	I1017 20:12:11.707270  379394 settings.go:142] acquiring lock: {Name:mka4633fb25e97d0a4c6d64012444d90b7517c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:11.707339  379394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:12:11.708472  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:11.708721  379394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:12:11.708852  379394 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:12:11.708944  379394 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-202048"
	I1017 20:12:11.708961  379394 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-202048"
	W1017 20:12:11.708967  379394 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:12:11.708968  379394 config.go:182] Loaded profile config "cert-expiration-202048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:11.708983  379394 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-202048"
	I1017 20:12:11.708996  379394 host.go:66] Checking if "cert-expiration-202048" exists ...
	I1017 20:12:11.709002  379394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-202048"
	I1017 20:12:11.709358  379394 cli_runner.go:164] Run: docker container inspect cert-expiration-202048 --format={{.State.Status}}
	I1017 20:12:11.709490  379394 cli_runner.go:164] Run: docker container inspect cert-expiration-202048 --format={{.State.Status}}
	I1017 20:12:11.711768  379394 out.go:179] * Verifying Kubernetes components...
	I1017 20:12:11.713341  379394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:11.735195  379394 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-202048"
	W1017 20:12:11.735209  379394 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:12:11.735239  379394 host.go:66] Checking if "cert-expiration-202048" exists ...
	I1017 20:12:11.735728  379394 cli_runner.go:164] Run: docker container inspect cert-expiration-202048 --format={{.State.Status}}
	I1017 20:12:11.736236  379394 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Oct 17 20:11:31 no-preload-449580 crio[567]: time="2025-10-17T20:11:31.242723806Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:11:31 no-preload-449580 crio[567]: time="2025-10-17T20:11:31.246232438Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:11:31 no-preload-449580 crio[567]: time="2025-10-17T20:11:31.246263491Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.416647568Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9cf720ef-c43a-4cd3-85a8-cc86d079deb0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.419222003Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=15ac0d5a-7279-4b55-a265-f748cb5877e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.422528726Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr/dashboard-metrics-scraper" id=0aeea7a3-a8d8-426b-8a14-2662672ee08e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.42488075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.433026261Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.433516092Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.45797253Z" level=info msg="Created container caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr/dashboard-metrics-scraper" id=0aeea7a3-a8d8-426b-8a14-2662672ee08e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.458674374Z" level=info msg="Starting container: caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4" id=9e48aac6-ff7c-4fa2-bcd4-59dfb11031bc name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.460559974Z" level=info msg="Started container" PID=1754 containerID=caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr/dashboard-metrics-scraper id=9e48aac6-ff7c-4fa2-bcd4-59dfb11031bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=70d3b8cefc8fafede22d0b6a2db04634f0b3726af2caa390c639178bfbf24664
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.512244108Z" level=info msg="Removing container: 210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140" id=272c2702-5661-4235-bd70-7da5069a96bb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.52491767Z" level=info msg="Removed container 210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr/dashboard-metrics-scraper" id=272c2702-5661-4235-bd70-7da5069a96bb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.535400149Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=92d58921-e83d-4c2f-a9c8-c003f430def4 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.536342094Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d312602d-0f1b-4fb5-9f92-51bc688c2c05 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.537376188Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8d64a9ba-a2da-4b1c-a5f1-8dd2ed491a94 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.537629211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.54188964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.542084088Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4505dd81934e60c7d8f82423cb0a76a23e335043c4c523bf92b15039b87faab3/merged/etc/passwd: no such file or directory"
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.542118973Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4505dd81934e60c7d8f82423cb0a76a23e335043c4c523bf92b15039b87faab3/merged/etc/group: no such file or directory"
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.542375648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.568891266Z" level=info msg="Created container 657fe0dbb4b0cba7157b7d8d6dd281cba239e2b86568e955ef7820a3d73b740f: kube-system/storage-provisioner/storage-provisioner" id=8d64a9ba-a2da-4b1c-a5f1-8dd2ed491a94 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.56972631Z" level=info msg="Starting container: 657fe0dbb4b0cba7157b7d8d6dd281cba239e2b86568e955ef7820a3d73b740f" id=9df277af-32c1-4318-8284-37c51afa38c5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.572410909Z" level=info msg="Started container" PID=1768 containerID=657fe0dbb4b0cba7157b7d8d6dd281cba239e2b86568e955ef7820a3d73b740f description=kube-system/storage-provisioner/storage-provisioner id=9df277af-32c1-4318-8284-37c51afa38c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d10988d30cae04251eb02c520016bc2feef2279435feda27594089c2bb27bd61
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	657fe0dbb4b0c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   d10988d30cae0       storage-provisioner                          kube-system
	caf2282f6c9ba       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   70d3b8cefc8fa       dashboard-metrics-scraper-6ffb444bf9-gqppr   kubernetes-dashboard
	1995d053f3c77       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   ab9f36dc6b195       kubernetes-dashboard-855c9754f9-dkzr6        kubernetes-dashboard
	ac287094e0df1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   a582898b61e4a       busybox                                      default
	e4cdebb7a5f1e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   58862e03014ba       coredns-66bc5c9577-p4n86                     kube-system
	fdcad2e90c8dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   d10988d30cae0       storage-provisioner                          kube-system
	b2d438515e445       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   46ffb5bc7e724       kindnet-9xg9h                                kube-system
	2065ed557a2ff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   6c467c64199a9       kube-proxy-m5g7f                             kube-system
	344d142d37fe5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   aafd7f702c118       kube-controller-manager-no-preload-449580    kube-system
	6cf770e38746c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   a848e57954886       kube-scheduler-no-preload-449580             kube-system
	09d3164355d52       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   31d862254d9c6       kube-apiserver-no-preload-449580             kube-system
	da4d6ced5b128       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   4330ac863b0b2       etcd-no-preload-449580                       kube-system
	
	
	==> coredns [e4cdebb7a5f1e03ca1d6840a7e5d790daca58249854250430492d1c216465dc2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36746 - 18685 "HINFO IN 2838055078267949360.8804048191215482115. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.071273849s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-449580
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-449580
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=no-preload-449580
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_10_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:10:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-449580
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:12:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:11:50 +0000   Fri, 17 Oct 2025 20:10:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:11:50 +0000   Fri, 17 Oct 2025 20:10:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:11:50 +0000   Fri, 17 Oct 2025 20:10:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:11:50 +0000   Fri, 17 Oct 2025 20:10:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-449580
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                95a628c4-6711-4ed7-bc23-3a2b6d436bf1
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-p4n86                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-449580                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-9xg9h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-449580              250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-449580     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-m5g7f                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-449580              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gqppr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dkzr6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node no-preload-449580 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node no-preload-449580 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node no-preload-449580 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node no-preload-449580 event: Registered Node no-preload-449580 in Controller
	  Normal  NodeReady                94s                kubelet          Node no-preload-449580 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node no-preload-449580 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node no-preload-449580 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node no-preload-449580 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node no-preload-449580 event: Registered Node no-preload-449580 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [da4d6ced5b128794ebcf1eb3fba8085c8b428be8cc20e7b0cbbeb23351ceb4d4] <==
	{"level":"warn","ts":"2025-10-17T20:11:18.893408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.901392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.909998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.918518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.926006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.937351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.942246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.949309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.957063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.964962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.972331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.979974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.987675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.994391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.002290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.009467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.018454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.026646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.034939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.056234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.060584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.068152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.078304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.132435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50446","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T20:11:56.458311Z","caller":"traceutil/trace.go:172","msg":"trace[2109965745] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"136.693528ms","start":"2025-10-17T20:11:56.321592Z","end":"2025-10-17T20:11:56.458286Z","steps":["trace[2109965745] 'process raft request'  (duration: 78.619058ms)","trace[2109965745] 'compare'  (duration: 57.963023ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:12:13 up  1:54,  0 user,  load average: 3.49, 3.50, 2.37
	Linux no-preload-449580 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b2d438515e445e965a062ab1d3673eae9c240a5640ff6c902c5709be255d0b55] <==
	I1017 20:11:21.020972       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:11:21.021283       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 20:11:21.021442       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:11:21.021459       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:11:21.021481       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:11:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:11:21.222815       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:11:21.223336       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:11:21.223363       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:11:21.223501       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:11:21.620868       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:11:21.620928       1 metrics.go:72] Registering metrics
	I1017 20:11:21.621100       1 controller.go:711] "Syncing nftables rules"
	I1017 20:11:31.222812       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 20:11:31.222881       1 main.go:301] handling current node
	I1017 20:11:41.223031       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 20:11:41.223066       1 main.go:301] handling current node
	I1017 20:11:51.223013       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 20:11:51.223053       1 main.go:301] handling current node
	I1017 20:12:01.225941       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 20:12:01.225999       1 main.go:301] handling current node
	I1017 20:12:11.230831       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 20:12:11.230881       1 main.go:301] handling current node
	
	
	==> kube-apiserver [09d3164355d524c8b81db0b45da6184b8608f2453c76034f04243ff5a2366382] <==
	I1017 20:11:19.640658       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:11:19.640665       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:11:19.640671       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:11:19.640712       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:11:19.640782       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:11:19.640521       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:11:19.640960       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:11:19.648336       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:11:19.648565       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:11:19.648637       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:11:19.650888       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1017 20:11:19.654802       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:11:19.659505       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 20:11:19.671501       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:11:19.892261       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:11:19.925268       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:11:19.946336       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:11:19.953600       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:11:19.963411       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:11:20.001103       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.245.242"}
	I1017 20:11:20.013269       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.202.59"}
	I1017 20:11:20.543474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:11:23.384228       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:11:23.432388       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:11:23.482799       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [344d142d37fe5e0cf83f172832d2f0380baafcfe5af95563d75af080c8f38c3c] <==
	I1017 20:11:22.928844       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:11:22.929139       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:11:22.929201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:11:22.929212       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:11:22.929227       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:11:22.929405       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 20:11:22.929476       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:11:22.929594       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:11:22.929618       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:11:22.929717       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 20:11:22.929727       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:11:22.931358       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:11:22.931380       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:11:22.931473       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:11:22.931523       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:11:22.931534       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:11:22.931541       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:11:22.933583       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:11:22.933595       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:11:22.935808       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:11:22.938081       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:11:22.942399       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:11:22.943628       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:11:22.945787       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:11:22.996502       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2065ed557a2ff9e4311486d101858ee5b30b748b19f878da0d5158806d03a998] <==
	I1017 20:11:20.816065       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:11:20.882641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:11:20.983416       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:11:20.983456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 20:11:20.983591       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:11:21.003623       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:11:21.003685       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:11:21.009140       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:11:21.009965       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:11:21.010072       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:11:21.012322       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:11:21.012435       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:11:21.012356       1 config.go:200] "Starting service config controller"
	I1017 20:11:21.012525       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:11:21.012390       1 config.go:309] "Starting node config controller"
	I1017 20:11:21.012539       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:11:21.012729       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:11:21.012438       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:11:21.012797       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:11:21.112695       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:11:21.112704       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:11:21.113872       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6cf770e38746c4716bb308f95e151bdd97000b0a2142f8c26a0763b88060594f] <==
	I1017 20:11:18.902246       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:11:19.827163       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:11:19.827205       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:11:19.832625       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:11:19.832643       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:11:19.832653       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:11:19.832670       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:11:19.832662       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:11:19.832730       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:11:19.833135       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:11:19.833200       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:11:19.932864       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:11:19.932864       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:11:19.932876       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 17 20:11:23 no-preload-449580 kubelet[711]: I1017 20:11:23.696334     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh67t\" (UniqueName: \"kubernetes.io/projected/45832212-ca15-41a2-a9e8-9fc966fee3c2-kube-api-access-mh67t\") pod \"dashboard-metrics-scraper-6ffb444bf9-gqppr\" (UID: \"45832212-ca15-41a2-a9e8-9fc966fee3c2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr"
	Oct 17 20:11:23 no-preload-449580 kubelet[711]: I1017 20:11:23.696360     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/92cf2d50-aa83-4686-8f20-055646b5e2b8-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dkzr6\" (UID: \"92cf2d50-aa83-4686-8f20-055646b5e2b8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dkzr6"
	Oct 17 20:11:26 no-preload-449580 kubelet[711]: I1017 20:11:26.242828     711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 17 20:11:26 no-preload-449580 kubelet[711]: I1017 20:11:26.465496     711 scope.go:117] "RemoveContainer" containerID="56d81e40f46f24f50d2a02715f183704856dfaee453faff850e09400d5a45421"
	Oct 17 20:11:27 no-preload-449580 kubelet[711]: I1017 20:11:27.470419     711 scope.go:117] "RemoveContainer" containerID="56d81e40f46f24f50d2a02715f183704856dfaee453faff850e09400d5a45421"
	Oct 17 20:11:27 no-preload-449580 kubelet[711]: I1017 20:11:27.470576     711 scope.go:117] "RemoveContainer" containerID="210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140"
	Oct 17 20:11:27 no-preload-449580 kubelet[711]: E1017 20:11:27.470820     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqppr_kubernetes-dashboard(45832212-ca15-41a2-a9e8-9fc966fee3c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr" podUID="45832212-ca15-41a2-a9e8-9fc966fee3c2"
	Oct 17 20:11:28 no-preload-449580 kubelet[711]: I1017 20:11:28.475120     711 scope.go:117] "RemoveContainer" containerID="210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140"
	Oct 17 20:11:28 no-preload-449580 kubelet[711]: E1017 20:11:28.475303     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqppr_kubernetes-dashboard(45832212-ca15-41a2-a9e8-9fc966fee3c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr" podUID="45832212-ca15-41a2-a9e8-9fc966fee3c2"
	Oct 17 20:11:30 no-preload-449580 kubelet[711]: I1017 20:11:30.155296     711 scope.go:117] "RemoveContainer" containerID="210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140"
	Oct 17 20:11:30 no-preload-449580 kubelet[711]: E1017 20:11:30.155522     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqppr_kubernetes-dashboard(45832212-ca15-41a2-a9e8-9fc966fee3c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr" podUID="45832212-ca15-41a2-a9e8-9fc966fee3c2"
	Oct 17 20:11:31 no-preload-449580 kubelet[711]: I1017 20:11:31.493041     711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dkzr6" podStartSLOduration=1.732910111 podStartE2EDuration="8.493019048s" podCreationTimestamp="2025-10-17 20:11:23 +0000 UTC" firstStartedPulling="2025-10-17 20:11:23.937391771 +0000 UTC m=+6.639130317" lastFinishedPulling="2025-10-17 20:11:30.697500707 +0000 UTC m=+13.399239254" observedRunningTime="2025-10-17 20:11:31.493010386 +0000 UTC m=+14.194748950" watchObservedRunningTime="2025-10-17 20:11:31.493019048 +0000 UTC m=+14.194757612"
	Oct 17 20:11:42 no-preload-449580 kubelet[711]: I1017 20:11:42.416174     711 scope.go:117] "RemoveContainer" containerID="210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140"
	Oct 17 20:11:42 no-preload-449580 kubelet[711]: I1017 20:11:42.510923     711 scope.go:117] "RemoveContainer" containerID="210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140"
	Oct 17 20:11:42 no-preload-449580 kubelet[711]: I1017 20:11:42.511161     711 scope.go:117] "RemoveContainer" containerID="caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4"
	Oct 17 20:11:42 no-preload-449580 kubelet[711]: E1017 20:11:42.511354     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqppr_kubernetes-dashboard(45832212-ca15-41a2-a9e8-9fc966fee3c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr" podUID="45832212-ca15-41a2-a9e8-9fc966fee3c2"
	Oct 17 20:11:50 no-preload-449580 kubelet[711]: I1017 20:11:50.155296     711 scope.go:117] "RemoveContainer" containerID="caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4"
	Oct 17 20:11:50 no-preload-449580 kubelet[711]: E1017 20:11:50.155986     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqppr_kubernetes-dashboard(45832212-ca15-41a2-a9e8-9fc966fee3c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr" podUID="45832212-ca15-41a2-a9e8-9fc966fee3c2"
	Oct 17 20:11:51 no-preload-449580 kubelet[711]: I1017 20:11:51.534969     711 scope.go:117] "RemoveContainer" containerID="fdcad2e90c8dcf59aada3333930294077886b20dc4ffa931ec9d1f20d86de19d"
	Oct 17 20:12:01 no-preload-449580 kubelet[711]: I1017 20:12:01.415885     711 scope.go:117] "RemoveContainer" containerID="caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4"
	Oct 17 20:12:01 no-preload-449580 kubelet[711]: E1017 20:12:01.416108     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqppr_kubernetes-dashboard(45832212-ca15-41a2-a9e8-9fc966fee3c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr" podUID="45832212-ca15-41a2-a9e8-9fc966fee3c2"
	Oct 17 20:12:10 no-preload-449580 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:12:10 no-preload-449580 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:12:10 no-preload-449580 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 20:12:10 no-preload-449580 systemd[1]: kubelet.service: Consumed 1.706s CPU time.
	
	
	==> kubernetes-dashboard [1995d053f3c779ae7a5d37d3f2392fc388fb7eaf8a318c4c16bc4e63cc6cd09b] <==
	2025/10/17 20:11:30 Starting overwatch
	2025/10/17 20:11:30 Using namespace: kubernetes-dashboard
	2025/10/17 20:11:30 Using in-cluster config to connect to apiserver
	2025/10/17 20:11:30 Using secret token for csrf signing
	2025/10/17 20:11:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:11:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:11:30 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 20:11:30 Generating JWE encryption key
	2025/10/17 20:11:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:11:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:11:30 Initializing JWE encryption key from synchronized object
	2025/10/17 20:11:30 Creating in-cluster Sidecar client
	2025/10/17 20:11:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:11:30 Serving insecurely on HTTP port: 9090
	2025/10/17 20:12:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [657fe0dbb4b0cba7157b7d8d6dd281cba239e2b86568e955ef7820a3d73b740f] <==
	I1017 20:11:51.586370       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:11:51.595885       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:11:51.595937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:11:51.598364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:11:55.052996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:11:59.313182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:02.911813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:05.966325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:08.988907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:08.994822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:12:08.994975       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:12:08.995056       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0252561b-3175-478a-ae66-c43f417b884b", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-449580_4bb0b45a-bae7-4485-afaf-0842c5c38fde became leader
	I1017 20:12:08.995157       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-449580_4bb0b45a-bae7-4485-afaf-0842c5c38fde!
	W1017 20:12:08.998028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:09.003433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:12:09.095651       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-449580_4bb0b45a-bae7-4485-afaf-0842c5c38fde!
	W1017 20:12:11.006863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:11.012034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:13.016950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:13.023519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fdcad2e90c8dcf59aada3333930294077886b20dc4ffa931ec9d1f20d86de19d] <==
	I1017 20:11:20.786938       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:11:50.789140       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
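The kubelet tail above ends with systemd stopping kubelet.service, which is the expected effect of the pause; the failure is that the follow-up status probe still reported the apiserver as Running. A minimal manual reproduction of the sequence the test drives, assuming the no-preload-449580 profile is still present on the host, is:

	# Pause the profile the way the failing test does, then probe component state;
	# in this run the status probe printed "Running" and exited 2 instead of
	# reporting a paused apiserver.
	out/minikube-linux-amd64 pause -p no-preload-449580 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-449580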
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-449580 -n no-preload-449580
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-449580 -n no-preload-449580: exit status 2 (361.628912ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-449580 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-449580
helpers_test.go:243: (dbg) docker inspect no-preload-449580:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb",
	        "Created": "2025-10-17T20:09:52.380878563Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 369903,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:11:09.555461475Z",
	            "FinishedAt": "2025-10-17T20:11:08.726874589Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb/hosts",
	        "LogPath": "/var/lib/docker/containers/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb/11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb-json.log",
	        "Name": "/no-preload-449580",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-449580:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-449580",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11713a3ef64d9f6151897cf282bcb9e2b9c9e4e27487f09796f25e824af057eb",
	                "LowerDir": "/var/lib/docker/overlay2/c7ad98093ee207252ec827bedcd754cea7ba300950ae4070abdafab8792e4b46-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7ad98093ee207252ec827bedcd754cea7ba300950ae4070abdafab8792e4b46/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7ad98093ee207252ec827bedcd754cea7ba300950ae4070abdafab8792e4b46/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7ad98093ee207252ec827bedcd754cea7ba300950ae4070abdafab8792e4b46/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-449580",
	                "Source": "/var/lib/docker/volumes/no-preload-449580/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-449580",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-449580",
	                "name.minikube.sigs.k8s.io": "no-preload-449580",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "421ae79c3faa207b2636cb7cbd1afde746b1c221b0a298f154415a66dec8fc3d",
	            "SandboxKey": "/var/run/docker/netns/421ae79c3faa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-449580": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:38:8d:43:88:9d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b82ebd045e12b91841d651f11549608344307c54224bf0d85f675490a33cca93",
	                    "EndpointID": "7ffbb798f3421d91e64321b56d0ca6d197c9fbedd8cfa5316ca3e704d6a91a12",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-449580",
	                        "11713a3ef64d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
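The inspect output above is the full container object; the few fields a pause post-mortem actually needs (runtime state and the host port mapped to the apiserver's 8443) can be pulled directly with a Go template via docker's standard --format flag. This is an illustrative one-liner, not something the harness itself runs:

	# Per the dump above this prints: status=running paused=false apiserver=127.0.0.1:33187
	docker inspect no-preload-449580 --format \
	  'status={{.State.Status}} paused={{.State.Paused}} apiserver=127.0.0.1:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'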
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449580 -n no-preload-449580
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449580 -n no-preload-449580: exit status 2 (330.444524ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-449580 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-449580 logs -n 25: (1.379925553s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p running-upgrade-097245 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-097245    │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p missing-upgrade-159057                                                                                                                                                                                                                     │ missing-upgrade-159057    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p force-systemd-flag-599050 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p running-upgrade-097245                                                                                                                                                                                                                     │ running-upgrade-097245    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ ssh     │ force-systemd-flag-599050 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p force-systemd-flag-599050                                                                                                                                                                                                                  │ force-systemd-flag-599050 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-726816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p old-k8s-version-726816 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-726816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-449580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p no-preload-449580 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:11 UTC │
	│ addons  │ enable dashboard -p no-preload-449580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ image   │ old-k8s-version-726816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ pause   │ -p old-k8s-version-726816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816    │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-051488        │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	│ start   │ -p cert-expiration-202048 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-202048    │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ image   │ no-preload-449580 image list --format=json                                                                                                                                                                                                    │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ pause   │ -p no-preload-449580 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-449580         │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ delete  │ -p cert-expiration-202048                                                                                                                                                                                                                     │ cert-expiration-202048    │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:12:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:12:05.364535  379394 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:12:05.364806  379394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:12:05.364810  379394 out.go:374] Setting ErrFile to fd 2...
	I1017 20:12:05.364816  379394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:12:05.365107  379394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:12:05.365679  379394 out.go:368] Setting JSON to false
	I1017 20:12:05.367244  379394 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6873,"bootTime":1760725052,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:12:05.367425  379394 start.go:141] virtualization: kvm guest
	I1017 20:12:05.369905  379394 out.go:179] * [cert-expiration-202048] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:12:05.371442  379394 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:12:05.371426  379394 notify.go:220] Checking for updates...
	I1017 20:12:05.373013  379394 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:12:05.374467  379394 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:12:05.375976  379394 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:12:05.377314  379394 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:12:05.378648  379394 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:12:05.380453  379394 config.go:182] Loaded profile config "cert-expiration-202048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:05.381232  379394 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:12:05.414382  379394 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:12:05.414539  379394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:12:05.490109  379394 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 20:12:05.478083658 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:12:05.490205  379394 docker.go:318] overlay module found
	I1017 20:12:05.494008  379394 out.go:179] * Using the docker driver based on existing profile
	I1017 20:12:05.495924  379394 start.go:305] selected driver: docker
	I1017 20:12:05.495938  379394 start.go:925] validating driver "docker" against &{Name:cert-expiration-202048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-202048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:05.496089  379394 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:12:05.496918  379394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:12:05.565931  379394 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 20:12:05.553819033 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:12:05.566268  379394 cni.go:84] Creating CNI manager for ""
	I1017 20:12:05.566346  379394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:05.566393  379394 start.go:349] cluster config:
	{Name:cert-expiration-202048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-202048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:05.569263  379394 out.go:179] * Starting "cert-expiration-202048" primary control-plane node in "cert-expiration-202048" cluster
	I1017 20:12:05.570667  379394 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:12:05.572167  379394 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:12:05.573537  379394 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:12:05.573579  379394 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 20:12:05.573595  379394 cache.go:58] Caching tarball of preloaded images
	I1017 20:12:05.573648  379394 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:12:05.573713  379394 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:12:05.573723  379394 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:12:05.573859  379394 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/config.json ...
	I1017 20:12:05.595769  379394 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:12:05.595785  379394 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:12:05.595804  379394 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:12:05.595833  379394 start.go:360] acquireMachinesLock for cert-expiration-202048: {Name:mkeb350189e5dcd93a71dc9a551cd333325075c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:12:05.595899  379394 start.go:364] duration metric: took 46.623µs to acquireMachinesLock for "cert-expiration-202048"
	I1017 20:12:05.595917  379394 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:12:05.595922  379394 fix.go:54] fixHost starting: 
	I1017 20:12:05.596143  379394 cli_runner.go:164] Run: docker container inspect cert-expiration-202048 --format={{.State.Status}}
	I1017 20:12:05.614896  379394 fix.go:112] recreateIfNeeded on cert-expiration-202048: state=Running err=<nil>
	W1017 20:12:05.614929  379394 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:12:04.899994  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:12:04.900450  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:12:04.900511  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:12:04.900571  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:12:04.929112  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:12:04.929143  344862 cri.go:89] found id: ""
	I1017 20:12:04.929155  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:12:04.929219  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:04.933579  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:12:04.933650  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:12:04.963457  344862 cri.go:89] found id: ""
	I1017 20:12:04.963492  344862 logs.go:282] 0 containers: []
	W1017 20:12:04.963505  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:12:04.963512  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:12:04.963576  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:12:04.994022  344862 cri.go:89] found id: ""
	I1017 20:12:04.994050  344862 logs.go:282] 0 containers: []
	W1017 20:12:04.994062  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:12:04.994075  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:12:04.994147  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:12:05.023819  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:05.023846  344862 cri.go:89] found id: ""
	I1017 20:12:05.023857  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:12:05.023926  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:05.028219  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:12:05.028287  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:12:05.059680  344862 cri.go:89] found id: ""
	I1017 20:12:05.059711  344862 logs.go:282] 0 containers: []
	W1017 20:12:05.059722  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:12:05.059730  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:12:05.059811  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:12:05.089000  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:05.089023  344862 cri.go:89] found id: ""
	I1017 20:12:05.089031  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:12:05.089092  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:05.093375  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:12:05.093452  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:12:05.122730  344862 cri.go:89] found id: ""
	I1017 20:12:05.122774  344862 logs.go:282] 0 containers: []
	W1017 20:12:05.122786  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:12:05.122795  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:12:05.122858  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:12:05.152104  344862 cri.go:89] found id: ""
	I1017 20:12:05.152138  344862 logs.go:282] 0 containers: []
	W1017 20:12:05.152152  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:12:05.152163  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:05.152178  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:12:05.199349  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:05.199388  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:05.234068  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:12:05.234101  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:12:05.322838  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:05.322874  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:05.344060  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:12:05.344091  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:12:05.412759  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:12:05.412782  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:12:05.412800  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:12:05.462272  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:05.462323  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:05.532655  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:12:05.532717  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:03.008445  376518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.crt.8a4f5dce ...
	I1017 20:12:03.008475  376518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.crt.8a4f5dce: {Name:mk81a89ba9e4fdfb95ee5422fb1576cd0840c0d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:03.008674  376518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.key.8a4f5dce ...
	I1017 20:12:03.008691  376518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.key.8a4f5dce: {Name:mk357bd95fee2a329f370077c6a642cb4659a2ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:03.008828  376518 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.crt.8a4f5dce -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.crt
	I1017 20:12:03.008940  376518 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.key.8a4f5dce -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.key
	I1017 20:12:03.009032  376518 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.key
	I1017 20:12:03.009053  376518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.crt with IP's: []
	I1017 20:12:03.340983  376518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.crt ...
	I1017 20:12:03.341011  376518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.crt: {Name:mk5676468906393c987db64c1bb5ac4d5655daed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:03.341183  376518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.key ...
	I1017 20:12:03.341196  376518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.key: {Name:mkf55535b7cb3665bf3c84db43c37e9a25a285ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:03.341386  376518 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:12:03.341424  376518 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:12:03.341431  376518 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:12:03.341456  376518 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:12:03.341478  376518 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:12:03.341499  376518 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:12:03.341539  376518 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:03.342170  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:12:03.362349  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:12:03.382466  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:12:03.401412  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:12:03.420856  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1017 20:12:03.441327  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:12:03.461491  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:12:03.481584  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:12:03.501450  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:12:03.523319  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:12:03.543170  376518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:12:03.562794  376518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:12:03.578364  376518 ssh_runner.go:195] Run: openssl version
	I1017 20:12:03.584970  376518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:12:03.594702  376518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:12:03.599190  376518 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:12:03.599256  376518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:12:03.634612  376518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:12:03.643985  376518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:12:03.653093  376518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:03.657446  376518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:03.657511  376518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:03.693092  376518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:12:03.702729  376518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:12:03.712510  376518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:12:03.717555  376518 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:12:03.717622  376518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:12:03.754542  376518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
	I1017 20:12:03.763952  376518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:12:03.767937  376518 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:12:03.767998  376518 kubeadm.go:400] StartCluster: {Name:embed-certs-051488 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-051488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:03.768078  376518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:12:03.768132  376518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:12:03.797173  376518 cri.go:89] found id: ""
	I1017 20:12:03.797243  376518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:12:03.806231  376518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:12:03.814593  376518 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:12:03.814651  376518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:12:03.823363  376518 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:12:03.823392  376518 kubeadm.go:157] found existing configuration files:
	
	I1017 20:12:03.823447  376518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:12:03.831826  376518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:12:03.831888  376518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:12:03.840231  376518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:12:03.849118  376518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:12:03.849176  376518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:12:03.857308  376518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:12:03.865537  376518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:12:03.865594  376518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:12:03.873870  376518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:12:03.883348  376518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:12:03.883415  376518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:12:03.892654  376518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:12:03.959630  376518 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 20:12:04.024087  376518 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 20:12:05.617002  379394 out.go:252] * Updating the running docker "cert-expiration-202048" container ...
	I1017 20:12:05.617039  379394 machine.go:93] provisionDockerMachine start ...
	I1017 20:12:05.617125  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:05.636143  379394 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:05.636360  379394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1017 20:12:05.636366  379394 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:12:05.774043  379394 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-202048
	
	I1017 20:12:05.774074  379394 ubuntu.go:182] provisioning hostname "cert-expiration-202048"
	I1017 20:12:05.774138  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:05.793892  379394 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:05.794125  379394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1017 20:12:05.794134  379394 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-202048 && echo "cert-expiration-202048" | sudo tee /etc/hostname
	I1017 20:12:05.943981  379394 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-202048
	
	I1017 20:12:05.944073  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:05.964393  379394 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:05.964701  379394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1017 20:12:05.964722  379394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-202048' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-202048/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-202048' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:12:06.104266  379394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:12:06.104290  379394 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:12:06.104310  379394 ubuntu.go:190] setting up certificates
	I1017 20:12:06.104320  379394 provision.go:84] configureAuth start
	I1017 20:12:06.104374  379394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-202048
	I1017 20:12:06.126268  379394 provision.go:143] copyHostCerts
	I1017 20:12:06.126332  379394 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:12:06.126345  379394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:12:06.126411  379394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:12:06.126515  379394 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:12:06.126519  379394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:12:06.126544  379394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:12:06.126614  379394 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:12:06.126617  379394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:12:06.126638  379394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:12:06.126697  379394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-202048 san=[127.0.0.1 192.168.85.2 cert-expiration-202048 localhost minikube]
	I1017 20:12:06.234140  379394 provision.go:177] copyRemoteCerts
	I1017 20:12:06.234200  379394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:12:06.234233  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:06.253484  379394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/cert-expiration-202048/id_rsa Username:docker}
	I1017 20:12:06.354265  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:12:06.375482  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1017 20:12:06.397465  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:12:06.418758  379394 provision.go:87] duration metric: took 314.409814ms to configureAuth
	I1017 20:12:06.418782  379394 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:12:06.418967  379394 config.go:182] Loaded profile config "cert-expiration-202048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:06.419052  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:06.440072  379394 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:06.440305  379394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1017 20:12:06.440331  379394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:12:06.763041  379394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:12:06.763059  379394 machine.go:96] duration metric: took 1.146012668s to provisionDockerMachine
	I1017 20:12:06.763072  379394 start.go:293] postStartSetup for "cert-expiration-202048" (driver="docker")
	I1017 20:12:06.763085  379394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:12:06.763163  379394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:12:06.763207  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:06.782548  379394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/cert-expiration-202048/id_rsa Username:docker}
	I1017 20:12:06.882199  379394 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:12:06.886354  379394 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:12:06.886379  379394 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:12:06.886392  379394 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:12:06.886458  379394 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:12:06.886546  379394 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:12:06.886663  379394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:12:06.896350  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:06.917281  379394 start.go:296] duration metric: took 154.191627ms for postStartSetup
	I1017 20:12:06.917379  379394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:12:06.917419  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:06.938672  379394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/cert-expiration-202048/id_rsa Username:docker}
	I1017 20:12:07.034613  379394 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:12:07.040083  379394 fix.go:56] duration metric: took 1.44415046s for fixHost
	I1017 20:12:07.040105  379394 start.go:83] releasing machines lock for "cert-expiration-202048", held for 1.444197486s
	I1017 20:12:07.040202  379394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-202048
	I1017 20:12:07.059140  379394 ssh_runner.go:195] Run: cat /version.json
	I1017 20:12:07.059191  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:07.059211  379394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:12:07.059259  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:07.079960  379394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/cert-expiration-202048/id_rsa Username:docker}
	I1017 20:12:07.080002  379394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/cert-expiration-202048/id_rsa Username:docker}
	I1017 20:12:07.176508  379394 ssh_runner.go:195] Run: systemctl --version
	I1017 20:12:07.242813  379394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:12:07.282734  379394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:12:07.288191  379394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:12:07.288258  379394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:12:07.297579  379394 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:12:07.297600  379394 start.go:495] detecting cgroup driver to use...
	I1017 20:12:07.297635  379394 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:12:07.297686  379394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:12:07.313866  379394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:12:07.327689  379394 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:12:07.327769  379394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:12:07.344276  379394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:12:07.358938  379394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:12:07.484256  379394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:12:07.603067  379394 docker.go:234] disabling docker service ...
	I1017 20:12:07.603119  379394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:12:07.620392  379394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:12:07.635114  379394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:12:07.759185  379394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:12:07.876416  379394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:12:07.890218  379394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:12:07.906594  379394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:12:07.906648  379394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.917970  379394 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:12:07.918062  379394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.928773  379394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.939654  379394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.949962  379394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:12:07.959959  379394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.977414  379394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.987258  379394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:07.997929  379394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:12:08.007320  379394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:12:08.015855  379394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:08.155227  379394 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:12:08.342347  379394 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:12:08.342404  379394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:12:08.346847  379394 start.go:563] Will wait 60s for crictl version
	I1017 20:12:08.346897  379394 ssh_runner.go:195] Run: which crictl
	I1017 20:12:08.351338  379394 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:12:08.380244  379394 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:12:08.380321  379394 ssh_runner.go:195] Run: crio --version
	I1017 20:12:08.413225  379394 ssh_runner.go:195] Run: crio --version
	I1017 20:12:08.447822  379394 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:12:08.449406  379394 cli_runner.go:164] Run: docker network inspect cert-expiration-202048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:12:08.471543  379394 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 20:12:08.476539  379394 kubeadm.go:883] updating cluster {Name:cert-expiration-202048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-202048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:12:08.476671  379394 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:12:08.476727  379394 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:12:08.515853  379394 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:12:08.515867  379394 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:12:08.515923  379394 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:12:08.548446  379394 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:12:08.548462  379394 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:12:08.548471  379394 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1017 20:12:08.548596  379394 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-202048 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-202048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:12:08.548673  379394 ssh_runner.go:195] Run: crio config
	I1017 20:12:08.619390  379394 cni.go:84] Creating CNI manager for ""
	I1017 20:12:08.619404  379394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:08.619420  379394 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:12:08.619467  379394 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-202048 NodeName:cert-expiration-202048 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:12:08.619641  379394 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-202048"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:12:08.619710  379394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:12:08.632175  379394 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:12:08.632237  379394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:12:08.642841  379394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1017 20:12:08.659652  379394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:12:08.676528  379394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1017 20:12:08.691677  379394 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:12:08.695846  379394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:08.825164  379394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:12:08.838907  379394 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048 for IP: 192.168.85.2
	I1017 20:12:08.838920  379394 certs.go:195] generating shared ca certs ...
	I1017 20:12:08.838934  379394 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:08.839105  379394 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:12:08.839149  379394 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:12:08.839155  379394 certs.go:257] generating profile certs ...
	W1017 20:12:08.839277  379394 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1017 20:12:08.839299  379394 certs.go:624] cert expired /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.crt: expiration: 2025-10-17 20:11:50 +0000 UTC, now: 2025-10-17 20:12:08.839293005 +0000 UTC m=+3.521950829
	I1017 20:12:08.839424  379394 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.key
	I1017 20:12:08.839447  379394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.crt with IP's: []
	I1017 20:12:09.922147  379394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.crt ...
	I1017 20:12:09.922172  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.crt: {Name:mk9f34fdcdf0239482f54154fd9e382dad0e337b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:09.922367  379394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.key ...
	I1017 20:12:09.922384  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/client.key: {Name:mkc8709422661d1292ff8c373c083118adc912e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1017 20:12:09.922620  379394 out.go:285] ! Certificate apiserver.crt.05eb0b2e has expired. Generating a new one...
	I1017 20:12:09.922642  379394 certs.go:624] cert expired /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt.05eb0b2e: expiration: 2025-10-17 20:11:50 +0000 UTC, now: 2025-10-17 20:12:09.922635172 +0000 UTC m=+4.605292995
	I1017 20:12:09.922763  379394 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.key.05eb0b2e
	I1017 20:12:09.922782  379394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt.05eb0b2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1017 20:12:10.705875  379394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt.05eb0b2e ...
	I1017 20:12:10.705900  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt.05eb0b2e: {Name:mk2123b7facb3f90f5910d798f6cfbb4edbf0768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:10.706067  379394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.key.05eb0b2e ...
	I1017 20:12:10.706079  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.key.05eb0b2e: {Name:mk6326cde57a7fbe0db60280bf5e7459790a7539 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:10.706166  379394 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt.05eb0b2e -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt
	I1017 20:12:10.706349  379394 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.key.05eb0b2e -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.key
	W1017 20:12:10.706558  379394 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1017 20:12:10.706580  379394 certs.go:624] cert expired /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.crt: expiration: 2025-10-17 20:11:50 +0000 UTC, now: 2025-10-17 20:12:10.706572146 +0000 UTC m=+5.389229972
	I1017 20:12:10.706668  379394 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.key
	I1017 20:12:10.706693  379394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.crt with IP's: []
	I1017 20:12:10.799084  379394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.crt ...
	I1017 20:12:10.799103  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.crt: {Name:mka0331a625f20ecc1c1fe9c075609218787e89d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:10.799243  379394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.key ...
	I1017 20:12:10.799251  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.key: {Name:mk6b8b6c1d33a1cc431ca3873618ea9a7b9fd956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
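(Editor's note: the three expired profile certificates — client, apiserver, proxy-client — are regenerated in place above. The following is a rough sketch of issuing one replacement client certificate signed by an existing CA; it assumes an RSA PKCS#1 ca.key, and the paths, subject, and one-year validity are illustrative, not minikube's actual generator.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "errors"
        "math/big"
        "os"
        "time"
    )

    // loadCA reads a PEM CA certificate and its RSA (PKCS#1) private key.
    func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey, error) {
        certPEM, err := os.ReadFile(certPath)
        if err != nil {
            return nil, nil, err
        }
        keyPEM, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, nil, err
        }
        certBlock, _ := pem.Decode(certPEM)
        keyBlock, _ := pem.Decode(keyPEM)
        if certBlock == nil || keyBlock == nil {
            return nil, nil, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(certBlock.Bytes)
        if err != nil {
            return nil, nil, err
        }
        key, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            return nil, nil, err
        }
        return cert, key, nil
    }

    func main() {
        caCert, caKey, err := loadCA("ca.crt", "ca.key") // illustrative paths
        if err != nil {
            panic(err)
        }
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube-user"}, // illustrative subject
            NotBefore:    time.Now().Add(-time.Hour),             // small back-date for clock skew
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        _ = os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
        _ = os.WriteFile("client.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(leafKey)}), 0o600)
    }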
	I1017 20:12:10.799417  379394 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:12:10.799448  379394 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:12:10.799454  379394 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:12:10.799476  379394 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:12:10.799498  379394 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:12:10.799516  379394 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:12:10.799551  379394 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:10.800170  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:12:10.827565  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:12:10.850951  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:12:10.874513  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:12:10.895447  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1017 20:12:10.925615  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:12:10.949443  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:12:10.972544  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/cert-expiration-202048/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:12:10.997917  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:12:11.020290  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:12:11.040629  379394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:12:11.069907  379394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:12:11.085415  379394 ssh_runner.go:195] Run: openssl version
	I1017 20:12:11.092421  379394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:12:11.102163  379394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:12:11.106317  379394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:12:11.106379  379394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:12:11.149834  379394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:12:11.159508  379394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:12:11.169068  379394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:11.173294  379394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:11.173356  379394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:11.230143  379394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:12:11.240806  379394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:12:11.252293  379394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:12:11.256757  379394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:12:11.256811  379394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:12:11.308008  379394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
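(Editor's note: each installed PEM above is hashed with `openssl x509 -hash -noout` and then symlinked as `<hash>.0` — e.g. b5213941.0 — because OpenSSL locates trusted CAs in /etc/ssl/certs by subject-hash filename. A sketch of that step, shelling out to openssl as the log does; paths are illustrative.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash creates the <hash>.0 symlink OpenSSL expects.
    func linkBySubjectHash(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace a stale link, mirroring `ln -fs`
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }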
	I1017 20:12:11.320637  379394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:12:11.327649  379394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:12:11.369961  379394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:12:11.424851  379394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:12:11.482154  379394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:12:11.526249  379394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:12:11.575512  379394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
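(Editor's note: the remaining control-plane certificates are screened above with `openssl x509 -noout -checkend 86400`, which exits non-zero when the certificate expires within the next 86400 seconds. A rough Go equivalent of that check, assuming PEM-encoded input; not minikube's code.)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the cert's NotAfter falls inside the window,
    // i.e. the condition under which `-checkend` would fail.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 86400*time.Second)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }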
	I1017 20:12:11.629906  379394 kubeadm.go:400] StartCluster: {Name:cert-expiration-202048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-202048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:11.629999  379394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:12:11.630090  379394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:12:11.661827  379394 cri.go:89] found id: "9781483db16aa60d939b68115ffc0d63fa8e9f9decddc2d9d577e597adfe0c8d"
	I1017 20:12:11.661854  379394 cri.go:89] found id: "9088d862c033e96c67db030cb3c98053254b4e1c7d3cc60b9da67682246423ce"
	I1017 20:12:11.661858  379394 cri.go:89] found id: "4c9fe1d4de02d3e2a08e5e7ac998200b0cfa688626d41b9372b316c6d80e099f"
	I1017 20:12:11.661862  379394 cri.go:89] found id: "61e19a54d084a7370b2b05fd3684396dd3b89390517bfd29786c6f87a60c4e2a"
	I1017 20:12:11.661866  379394 cri.go:89] found id: "22276078f99298494af62eaf58e20b749dc111d4521573054cc3230e54426ea3"
	I1017 20:12:11.661870  379394 cri.go:89] found id: "47c5210d4f132802b5ce2ac954a76ac57878435d16216ff828796ce34f9a70bf"
	I1017 20:12:11.661873  379394 cri.go:89] found id: "442bdd88922d2c075abb61d1c000af32fadeacf6aa883612a19a60bc701a4ec1"
	I1017 20:12:11.661875  379394 cri.go:89] found id: "d437f4da5b6f02ca2d4e81fc31e407b12d81134c7c6f7a1d22e679e9e96237ca"
	I1017 20:12:11.661878  379394 cri.go:89] found id: ""
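(Editor's note: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` prints one container ID per line, which is why the loop above ends on an empty `found id: ""`. A sketch of collecting those IDs; the sudo/crictl invocation mirrors the command shown, the rest is illustrative.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            // Skip the trailing blank line, matching the empty found id above.
            if line = strings.TrimSpace(line); line != "" {
                ids = append(ids, line)
            }
        }
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }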
	I1017 20:12:11.661922  379394 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:12:11.675236  379394 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:12:11Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:12:11.675303  379394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:12:11.685164  379394 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:12:11.685178  379394 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:12:11.685242  379394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:12:11.695145  379394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:12:11.695906  379394 kubeconfig.go:125] found "cert-expiration-202048" server: "https://192.168.85.2:8443"
	I1017 20:12:11.697958  379394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:12:11.707208  379394 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1017 20:12:11.707240  379394 kubeadm.go:601] duration metric: took 22.056379ms to restartPrimaryControlPlane
	I1017 20:12:11.707249  379394 kubeadm.go:402] duration metric: took 77.353296ms to StartCluster
	I1017 20:12:11.707270  379394 settings.go:142] acquiring lock: {Name:mka4633fb25e97d0a4c6d64012444d90b7517c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:11.707339  379394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:12:11.708472  379394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:11.708721  379394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:12:11.708852  379394 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:12:11.708944  379394 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-202048"
	I1017 20:12:11.708961  379394 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-202048"
	W1017 20:12:11.708967  379394 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:12:11.708968  379394 config.go:182] Loaded profile config "cert-expiration-202048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:11.708983  379394 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-202048"
	I1017 20:12:11.708996  379394 host.go:66] Checking if "cert-expiration-202048" exists ...
	I1017 20:12:11.709002  379394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-202048"
	I1017 20:12:11.709358  379394 cli_runner.go:164] Run: docker container inspect cert-expiration-202048 --format={{.State.Status}}
	I1017 20:12:11.709490  379394 cli_runner.go:164] Run: docker container inspect cert-expiration-202048 --format={{.State.Status}}
	I1017 20:12:11.711768  379394 out.go:179] * Verifying Kubernetes components...
	I1017 20:12:11.713341  379394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:11.735195  379394 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-202048"
	W1017 20:12:11.735209  379394 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:12:11.735239  379394 host.go:66] Checking if "cert-expiration-202048" exists ...
	I1017 20:12:11.735728  379394 cli_runner.go:164] Run: docker container inspect cert-expiration-202048 --format={{.State.Status}}
	I1017 20:12:11.736236  379394 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:12:11.737526  379394 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:12:11.737537  379394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:12:11.737590  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:11.767958  379394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/cert-expiration-202048/id_rsa Username:docker}
	I1017 20:12:11.770858  379394 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:12:11.770889  379394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:12:11.770976  379394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-202048
	I1017 20:12:11.796604  379394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/cert-expiration-202048/id_rsa Username:docker}
	I1017 20:12:11.872904  379394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:12:11.883310  379394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:12:11.888924  379394 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:12:11.889012  379394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:12:11.913402  379394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:12:12.500842  379394 api_server.go:72] duration metric: took 792.077624ms to wait for apiserver process to appear ...
	I1017 20:12:12.500858  379394 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:12:12.500882  379394 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1017 20:12:12.506566  379394 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1017 20:12:12.513017  379394 api_server.go:141] control plane version: v1.34.1
	I1017 20:12:12.513039  379394 api_server.go:131] duration metric: took 12.174842ms to wait for apiserver health ...
	I1017 20:12:12.513048  379394 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:12:12.517421  379394 system_pods.go:59] 8 kube-system pods found
	I1017 20:12:12.517441  379394 system_pods.go:61] "coredns-66bc5c9577-ldjtc" [7dc404ca-d345-46bc-813d-be58549f71f9] Running
	I1017 20:12:12.517447  379394 system_pods.go:61] "etcd-cert-expiration-202048" [d08f58b5-250c-4d52-a1b4-f2cf890b3dc3] Running
	I1017 20:12:12.517452  379394 system_pods.go:61] "kindnet-kk7zm" [2fcec08a-59b9-49d8-b2d5-1ef44ce40d98] Running
	I1017 20:12:12.517456  379394 system_pods.go:61] "kube-apiserver-cert-expiration-202048" [d7fac238-a4e6-49b8-a7bf-ed74db70a7d1] Running
	I1017 20:12:12.517460  379394 system_pods.go:61] "kube-controller-manager-cert-expiration-202048" [8ec860ee-e5bd-4d33-bb6b-74d08285576e] Running
	I1017 20:12:12.517463  379394 system_pods.go:61] "kube-proxy-65qn7" [314c1c46-c076-4a55-bc55-eb6d6f007c2a] Running
	I1017 20:12:12.517467  379394 system_pods.go:61] "kube-scheduler-cert-expiration-202048" [4241bdea-79e8-4296-a11e-eb48c5d53828] Running
	I1017 20:12:12.517470  379394 system_pods.go:61] "storage-provisioner" [5644bce6-b169-41e3-a4c8-008d252183f2] Running
	I1017 20:12:12.517477  379394 system_pods.go:74] duration metric: took 4.422455ms to wait for pod list to return data ...
	I1017 20:12:12.517489  379394 kubeadm.go:586] duration metric: took 808.730936ms to wait for: map[apiserver:true system_pods:true]
	I1017 20:12:12.517502  379394 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:12:12.518878  379394 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 20:12:12.520493  379394 addons.go:514] duration metric: took 811.645032ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1017 20:12:12.520596  379394 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:12:12.520610  379394 node_conditions.go:123] node cpu capacity is 8
	I1017 20:12:12.520624  379394 node_conditions.go:105] duration metric: took 3.118172ms to run NodePressure ...
	I1017 20:12:12.520640  379394 start.go:241] waiting for startup goroutines ...
	I1017 20:12:12.520647  379394 start.go:246] waiting for cluster config update ...
	I1017 20:12:12.520657  379394 start.go:255] writing updated cluster config ...
	I1017 20:12:12.520958  379394 ssh_runner.go:195] Run: rm -f paused
	I1017 20:12:12.586236  379394 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:12:12.587735  379394 out.go:179] * Done! kubectl is now configured to use "cert-expiration-202048" cluster and "default" namespace by default
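(Editor's note: the apiserver wait in the run above polls https://192.168.85.2:8443/healthz until it returns 200 "ok". A minimal sketch of such a poll loop; the 6-minute budget mirrors the "Will wait 6m0s" line, and TLS verification is skipped here purely for illustration — a real client would trust the cluster CA.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.85.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }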
	I1017 20:12:08.068479  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:12:08.069026  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:12:08.069116  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:12:08.069181  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:12:08.101085  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:12:08.101116  344862 cri.go:89] found id: ""
	I1017 20:12:08.101128  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:12:08.101179  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:08.105381  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:12:08.105461  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:12:08.138838  344862 cri.go:89] found id: ""
	I1017 20:12:08.138870  344862 logs.go:282] 0 containers: []
	W1017 20:12:08.138878  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:12:08.138884  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:12:08.138944  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:12:08.168947  344862 cri.go:89] found id: ""
	I1017 20:12:08.168978  344862 logs.go:282] 0 containers: []
	W1017 20:12:08.168989  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:12:08.168997  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:12:08.169055  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:12:08.203841  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:08.203865  344862 cri.go:89] found id: ""
	I1017 20:12:08.203875  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:12:08.203939  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:08.208752  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:12:08.208828  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:12:08.242919  344862 cri.go:89] found id: ""
	I1017 20:12:08.242949  344862 logs.go:282] 0 containers: []
	W1017 20:12:08.242960  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:12:08.242968  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:12:08.243067  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:12:08.274983  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:08.275011  344862 cri.go:89] found id: ""
	I1017 20:12:08.275022  344862 logs.go:282] 1 containers: [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:12:08.275097  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:08.280349  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:12:08.280433  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:12:08.310687  344862 cri.go:89] found id: ""
	I1017 20:12:08.310713  344862 logs.go:282] 0 containers: []
	W1017 20:12:08.310725  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:12:08.310733  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:12:08.310811  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:12:08.341205  344862 cri.go:89] found id: ""
	I1017 20:12:08.341233  344862 logs.go:282] 0 containers: []
	W1017 20:12:08.341244  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:12:08.341256  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:12:08.341272  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:12:08.379684  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:08.379724  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:08.440421  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:12:08.440471  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:08.471988  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:08.472068  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:12:08.526122  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:08.526168  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:08.563398  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:12:08.563443  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:12:08.684906  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:08.684944  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:08.707119  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:12:08.707156  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:12:08.788069  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:12:11.288827  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:12:11.289306  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:12:11.289388  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:12:11.289451  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:12:11.335392  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:12:11.335419  344862 cri.go:89] found id: ""
	I1017 20:12:11.335431  344862 logs.go:282] 1 containers: [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:12:11.335498  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:11.339683  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:12:11.339796  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:12:11.369683  344862 cri.go:89] found id: ""
	I1017 20:12:11.369718  344862 logs.go:282] 0 containers: []
	W1017 20:12:11.369728  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:12:11.369753  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:12:11.369812  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:12:11.413401  344862 cri.go:89] found id: ""
	I1017 20:12:11.413438  344862 logs.go:282] 0 containers: []
	W1017 20:12:11.413450  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:12:11.413458  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:12:11.413520  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:12:11.455271  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:11.455324  344862 cri.go:89] found id: ""
	I1017 20:12:11.455335  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:12:11.455480  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:11.463385  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:12:11.463468  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:12:11.500625  344862 cri.go:89] found id: ""
	I1017 20:12:11.500656  344862 logs.go:282] 0 containers: []
	W1017 20:12:11.500668  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:12:11.500675  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:12:11.500734  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:12:11.534391  344862 cri.go:89] found id: "a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:11.534420  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:11.534426  344862 cri.go:89] found id: ""
	I1017 20:12:11.534436  344862 logs.go:282] 2 containers: [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:12:11.534501  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:11.538876  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:11.543690  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:12:11.543771  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:12:11.580930  344862 cri.go:89] found id: ""
	I1017 20:12:11.580961  344862 logs.go:282] 0 containers: []
	W1017 20:12:11.580973  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:12:11.580982  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:12:11.581043  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:12:11.626474  344862 cri.go:89] found id: ""
	I1017 20:12:11.626510  344862 logs.go:282] 0 containers: []
	W1017 20:12:11.626521  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:12:11.626540  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:12:11.626555  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:12:11.665166  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:12:11.665204  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:11.697002  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:11.697036  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:12:11.769048  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:11.769087  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:11.806949  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:11.806979  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:11.832574  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:12:11.832616  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:12:11.909580  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:12:11.909606  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:11.909624  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:11.992157  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:12:11.992190  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:12.027605  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:12:12.027633  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
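(Editor's note: while the apiserver on 192.168.76.2 keeps refusing connections, the second process (344862) repeatedly gathers diagnostics from fallback sources. A sketch of that collection loop, running the same journalctl/dmesg commands locally instead of over SSH; the non-fatal handling of a failed collector mirrors the W-level lines above.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        collectors := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        }
        for _, c := range collectors {
            out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
            if err != nil {
                // A failed collector is reported and skipped, never fatal.
                fmt.Printf("gathering %s failed: %v\n", c.name, err)
                continue
            }
            fmt.Printf("==> %s <==\n%s\n", c.name, out)
        }
    }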
	I1017 20:12:14.466151  376518 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:12:14.466271  376518 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:12:14.466436  376518 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:12:14.466553  376518 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 20:12:14.466619  376518 kubeadm.go:318] OS: Linux
	I1017 20:12:14.466705  376518 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:12:14.466821  376518 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:12:14.466892  376518 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:12:14.466980  376518 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:12:14.467029  376518 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:12:14.467096  376518 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:12:14.467194  376518 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:12:14.467276  376518 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 20:12:14.467400  376518 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:12:14.467546  376518 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:12:14.467725  376518 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:12:14.467854  376518 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
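(Editor's note: kubeadm's preflight prints one CGROUPS_* line per controller it verifies, as above. A sketch of an equivalent check, assuming a cgroup v2 host where enabled controllers are listed in /sys/fs/cgroup/cgroup.controllers; freezer and hugetlb are handled differently on v2 and are omitted here.)

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/sys/fs/cgroup/cgroup.controllers")
        if err != nil {
            fmt.Fprintln(os.Stderr, "not a cgroup v2 host:", err)
            return
        }
        enabled := map[string]bool{}
        for _, c := range strings.Fields(string(data)) {
            enabled[c] = true
        }
        for _, want := range []string{"cpu", "cpuset", "memory", "pids", "io"} {
            status := "missing"
            if enabled[want] {
                status = "enabled"
            }
            fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(want), status)
        }
    }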
	
	
	==> CRI-O <==
	Oct 17 20:11:31 no-preload-449580 crio[567]: time="2025-10-17T20:11:31.242723806Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:11:31 no-preload-449580 crio[567]: time="2025-10-17T20:11:31.246232438Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:11:31 no-preload-449580 crio[567]: time="2025-10-17T20:11:31.246263491Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.416647568Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9cf720ef-c43a-4cd3-85a8-cc86d079deb0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.419222003Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=15ac0d5a-7279-4b55-a265-f748cb5877e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.422528726Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr/dashboard-metrics-scraper" id=0aeea7a3-a8d8-426b-8a14-2662672ee08e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.42488075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.433026261Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.433516092Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.45797253Z" level=info msg="Created container caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr/dashboard-metrics-scraper" id=0aeea7a3-a8d8-426b-8a14-2662672ee08e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.458674374Z" level=info msg="Starting container: caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4" id=9e48aac6-ff7c-4fa2-bcd4-59dfb11031bc name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.460559974Z" level=info msg="Started container" PID=1754 containerID=caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr/dashboard-metrics-scraper id=9e48aac6-ff7c-4fa2-bcd4-59dfb11031bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=70d3b8cefc8fafede22d0b6a2db04634f0b3726af2caa390c639178bfbf24664
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.512244108Z" level=info msg="Removing container: 210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140" id=272c2702-5661-4235-bd70-7da5069a96bb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:11:42 no-preload-449580 crio[567]: time="2025-10-17T20:11:42.52491767Z" level=info msg="Removed container 210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr/dashboard-metrics-scraper" id=272c2702-5661-4235-bd70-7da5069a96bb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.535400149Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=92d58921-e83d-4c2f-a9c8-c003f430def4 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.536342094Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d312602d-0f1b-4fb5-9f92-51bc688c2c05 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.537376188Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8d64a9ba-a2da-4b1c-a5f1-8dd2ed491a94 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.537629211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.54188964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.542084088Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4505dd81934e60c7d8f82423cb0a76a23e335043c4c523bf92b15039b87faab3/merged/etc/passwd: no such file or directory"
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.542118973Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4505dd81934e60c7d8f82423cb0a76a23e335043c4c523bf92b15039b87faab3/merged/etc/group: no such file or directory"
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.542375648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.568891266Z" level=info msg="Created container 657fe0dbb4b0cba7157b7d8d6dd281cba239e2b86568e955ef7820a3d73b740f: kube-system/storage-provisioner/storage-provisioner" id=8d64a9ba-a2da-4b1c-a5f1-8dd2ed491a94 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.56972631Z" level=info msg="Starting container: 657fe0dbb4b0cba7157b7d8d6dd281cba239e2b86568e955ef7820a3d73b740f" id=9df277af-32c1-4318-8284-37c51afa38c5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:11:51 no-preload-449580 crio[567]: time="2025-10-17T20:11:51.572410909Z" level=info msg="Started container" PID=1768 containerID=657fe0dbb4b0cba7157b7d8d6dd281cba239e2b86568e955ef7820a3d73b740f description=kube-system/storage-provisioner/storage-provisioner id=9df277af-32c1-4318-8284-37c51afa38c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d10988d30cae04251eb02c520016bc2feef2279435feda27594089c2bb27bd61
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	657fe0dbb4b0c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   d10988d30cae0       storage-provisioner                          kube-system
	caf2282f6c9ba       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago      Exited              dashboard-metrics-scraper   2                   70d3b8cefc8fa       dashboard-metrics-scraper-6ffb444bf9-gqppr   kubernetes-dashboard
	1995d053f3c77       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   ab9f36dc6b195       kubernetes-dashboard-855c9754f9-dkzr6        kubernetes-dashboard
	ac287094e0df1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   a582898b61e4a       busybox                                      default
	e4cdebb7a5f1e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   58862e03014ba       coredns-66bc5c9577-p4n86                     kube-system
	fdcad2e90c8dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   d10988d30cae0       storage-provisioner                          kube-system
	b2d438515e445       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   46ffb5bc7e724       kindnet-9xg9h                                kube-system
	2065ed557a2ff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   6c467c64199a9       kube-proxy-m5g7f                             kube-system
	344d142d37fe5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   aafd7f702c118       kube-controller-manager-no-preload-449580    kube-system
	6cf770e38746c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   a848e57954886       kube-scheduler-no-preload-449580             kube-system
	09d3164355d52       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   31d862254d9c6       kube-apiserver-no-preload-449580             kube-system
	da4d6ced5b128       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   4330ac863b0b2       etcd-no-preload-449580                       kube-system
	
	
	==> coredns [e4cdebb7a5f1e03ca1d6840a7e5d790daca58249854250430492d1c216465dc2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36746 - 18685 "HINFO IN 2838055078267949360.8804048191215482115. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.071273849s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
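(Editor's note: the coredns i/o timeouts against 10.96.0.1:443 indicate the kubernetes service VIP had no working dataplane path yet — kube-proxy had only just restarted, per the node events further down — as opposed to a "connection refused" from a live but unhealthy endpoint. A sketch of probing that distinction with a plain TCP dial.)

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A timeout error here means packets to the VIP go nowhere;
        // a refusal would mean something answered and rejected us.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
        if err != nil {
            fmt.Println("service VIP unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("service VIP reachable")
    }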
	
	
	==> describe nodes <==
	Name:               no-preload-449580
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-449580
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=no-preload-449580
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_10_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:10:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-449580
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:12:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:11:50 +0000   Fri, 17 Oct 2025 20:10:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:11:50 +0000   Fri, 17 Oct 2025 20:10:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:11:50 +0000   Fri, 17 Oct 2025 20:10:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:11:50 +0000   Fri, 17 Oct 2025 20:10:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-449580
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                95a628c4-6711-4ed7-bc23-3a2b6d436bf1
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-p4n86                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-449580                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-9xg9h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-449580              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-449580     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-m5g7f                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-449580              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gqppr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dkzr6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node no-preload-449580 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node no-preload-449580 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node no-preload-449580 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node no-preload-449580 event: Registered Node no-preload-449580 in Controller
	  Normal  NodeReady                96s                kubelet          Node no-preload-449580 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node no-preload-449580 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node no-preload-449580 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node no-preload-449580 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node no-preload-449580 event: Registered Node no-preload-449580 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [da4d6ced5b128794ebcf1eb3fba8085c8b428be8cc20e7b0cbbeb23351ceb4d4] <==
	{"level":"warn","ts":"2025-10-17T20:11:18.893408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.901392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.909998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.918518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.926006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.937351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.942246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.949309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.957063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.964962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.972331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.979974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.987675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:18.994391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.002290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.009467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.018454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.026646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.034939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.056234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.060584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.068152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.078304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:11:19.132435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50446","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T20:11:56.458311Z","caller":"traceutil/trace.go:172","msg":"trace[2109965745] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"136.693528ms","start":"2025-10-17T20:11:56.321592Z","end":"2025-10-17T20:11:56.458286Z","steps":["trace[2109965745] 'process raft request'  (duration: 78.619058ms)","trace[2109965745] 'compare'  (duration: 57.963023ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:12:15 up  1:54,  0 user,  load average: 3.37, 3.47, 2.36
	Linux no-preload-449580 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b2d438515e445e965a062ab1d3673eae9c240a5640ff6c902c5709be255d0b55] <==
	I1017 20:11:21.020972       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:11:21.021283       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 20:11:21.021442       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:11:21.021459       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:11:21.021481       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:11:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:11:21.222815       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:11:21.223336       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:11:21.223363       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:11:21.223501       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:11:21.620868       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:11:21.620928       1 metrics.go:72] Registering metrics
	I1017 20:11:21.621100       1 controller.go:711] "Syncing nftables rules"
	I1017 20:11:31.222812       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 20:11:31.222881       1 main.go:301] handling current node
	I1017 20:11:41.223031       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 20:11:41.223066       1 main.go:301] handling current node
	I1017 20:11:51.223013       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 20:11:51.223053       1 main.go:301] handling current node
	I1017 20:12:01.225941       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 20:12:01.225999       1 main.go:301] handling current node
	I1017 20:12:11.230831       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1017 20:12:11.230881       1 main.go:301] handling current node
	
	
	==> kube-apiserver [09d3164355d524c8b81db0b45da6184b8608f2453c76034f04243ff5a2366382] <==
	I1017 20:11:19.640658       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:11:19.640665       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:11:19.640671       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:11:19.640712       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:11:19.640782       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:11:19.640521       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:11:19.640960       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:11:19.648336       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:11:19.648565       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:11:19.648637       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:11:19.650888       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1017 20:11:19.654802       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:11:19.659505       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 20:11:19.671501       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:11:19.892261       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:11:19.925268       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:11:19.946336       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:11:19.953600       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:11:19.963411       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:11:20.001103       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.245.242"}
	I1017 20:11:20.013269       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.202.59"}
	I1017 20:11:20.543474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:11:23.384228       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:11:23.432388       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:11:23.482799       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [344d142d37fe5e0cf83f172832d2f0380baafcfe5af95563d75af080c8f38c3c] <==
	I1017 20:11:22.928844       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:11:22.929139       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:11:22.929201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:11:22.929212       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:11:22.929227       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:11:22.929405       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 20:11:22.929476       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:11:22.929594       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:11:22.929618       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:11:22.929717       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 20:11:22.929727       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:11:22.931358       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:11:22.931380       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:11:22.931473       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:11:22.931523       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:11:22.931534       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:11:22.931541       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:11:22.933583       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:11:22.933595       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:11:22.935808       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:11:22.938081       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:11:22.942399       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:11:22.943628       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:11:22.945787       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:11:22.996502       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2065ed557a2ff9e4311486d101858ee5b30b748b19f878da0d5158806d03a998] <==
	I1017 20:11:20.816065       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:11:20.882641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:11:20.983416       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:11:20.983456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 20:11:20.983591       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:11:21.003623       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:11:21.003685       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:11:21.009140       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:11:21.009965       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:11:21.010072       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:11:21.012322       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:11:21.012435       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:11:21.012356       1 config.go:200] "Starting service config controller"
	I1017 20:11:21.012525       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:11:21.012390       1 config.go:309] "Starting node config controller"
	I1017 20:11:21.012539       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:11:21.012729       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:11:21.012438       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:11:21.012797       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:11:21.112695       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:11:21.112704       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:11:21.113872       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6cf770e38746c4716bb308f95e151bdd97000b0a2142f8c26a0763b88060594f] <==
	I1017 20:11:18.902246       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:11:19.827163       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:11:19.827205       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:11:19.832625       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:11:19.832643       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:11:19.832653       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:11:19.832670       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:11:19.832662       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:11:19.832730       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:11:19.833135       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:11:19.833200       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:11:19.932864       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:11:19.932864       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:11:19.932876       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 17 20:11:23 no-preload-449580 kubelet[711]: I1017 20:11:23.696334     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh67t\" (UniqueName: \"kubernetes.io/projected/45832212-ca15-41a2-a9e8-9fc966fee3c2-kube-api-access-mh67t\") pod \"dashboard-metrics-scraper-6ffb444bf9-gqppr\" (UID: \"45832212-ca15-41a2-a9e8-9fc966fee3c2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr"
	Oct 17 20:11:23 no-preload-449580 kubelet[711]: I1017 20:11:23.696360     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/92cf2d50-aa83-4686-8f20-055646b5e2b8-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dkzr6\" (UID: \"92cf2d50-aa83-4686-8f20-055646b5e2b8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dkzr6"
	Oct 17 20:11:26 no-preload-449580 kubelet[711]: I1017 20:11:26.242828     711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 17 20:11:26 no-preload-449580 kubelet[711]: I1017 20:11:26.465496     711 scope.go:117] "RemoveContainer" containerID="56d81e40f46f24f50d2a02715f183704856dfaee453faff850e09400d5a45421"
	Oct 17 20:11:27 no-preload-449580 kubelet[711]: I1017 20:11:27.470419     711 scope.go:117] "RemoveContainer" containerID="56d81e40f46f24f50d2a02715f183704856dfaee453faff850e09400d5a45421"
	Oct 17 20:11:27 no-preload-449580 kubelet[711]: I1017 20:11:27.470576     711 scope.go:117] "RemoveContainer" containerID="210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140"
	Oct 17 20:11:27 no-preload-449580 kubelet[711]: E1017 20:11:27.470820     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqppr_kubernetes-dashboard(45832212-ca15-41a2-a9e8-9fc966fee3c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr" podUID="45832212-ca15-41a2-a9e8-9fc966fee3c2"
	Oct 17 20:11:28 no-preload-449580 kubelet[711]: I1017 20:11:28.475120     711 scope.go:117] "RemoveContainer" containerID="210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140"
	Oct 17 20:11:28 no-preload-449580 kubelet[711]: E1017 20:11:28.475303     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqppr_kubernetes-dashboard(45832212-ca15-41a2-a9e8-9fc966fee3c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr" podUID="45832212-ca15-41a2-a9e8-9fc966fee3c2"
	Oct 17 20:11:30 no-preload-449580 kubelet[711]: I1017 20:11:30.155296     711 scope.go:117] "RemoveContainer" containerID="210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140"
	Oct 17 20:11:30 no-preload-449580 kubelet[711]: E1017 20:11:30.155522     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqppr_kubernetes-dashboard(45832212-ca15-41a2-a9e8-9fc966fee3c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr" podUID="45832212-ca15-41a2-a9e8-9fc966fee3c2"
	Oct 17 20:11:31 no-preload-449580 kubelet[711]: I1017 20:11:31.493041     711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dkzr6" podStartSLOduration=1.732910111 podStartE2EDuration="8.493019048s" podCreationTimestamp="2025-10-17 20:11:23 +0000 UTC" firstStartedPulling="2025-10-17 20:11:23.937391771 +0000 UTC m=+6.639130317" lastFinishedPulling="2025-10-17 20:11:30.697500707 +0000 UTC m=+13.399239254" observedRunningTime="2025-10-17 20:11:31.493010386 +0000 UTC m=+14.194748950" watchObservedRunningTime="2025-10-17 20:11:31.493019048 +0000 UTC m=+14.194757612"
	Oct 17 20:11:42 no-preload-449580 kubelet[711]: I1017 20:11:42.416174     711 scope.go:117] "RemoveContainer" containerID="210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140"
	Oct 17 20:11:42 no-preload-449580 kubelet[711]: I1017 20:11:42.510923     711 scope.go:117] "RemoveContainer" containerID="210bff1ee58099d1228780d3ffa3ae572b3718d5d988381ecbabe108968ee140"
	Oct 17 20:11:42 no-preload-449580 kubelet[711]: I1017 20:11:42.511161     711 scope.go:117] "RemoveContainer" containerID="caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4"
	Oct 17 20:11:42 no-preload-449580 kubelet[711]: E1017 20:11:42.511354     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqppr_kubernetes-dashboard(45832212-ca15-41a2-a9e8-9fc966fee3c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr" podUID="45832212-ca15-41a2-a9e8-9fc966fee3c2"
	Oct 17 20:11:50 no-preload-449580 kubelet[711]: I1017 20:11:50.155296     711 scope.go:117] "RemoveContainer" containerID="caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4"
	Oct 17 20:11:50 no-preload-449580 kubelet[711]: E1017 20:11:50.155986     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqppr_kubernetes-dashboard(45832212-ca15-41a2-a9e8-9fc966fee3c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr" podUID="45832212-ca15-41a2-a9e8-9fc966fee3c2"
	Oct 17 20:11:51 no-preload-449580 kubelet[711]: I1017 20:11:51.534969     711 scope.go:117] "RemoveContainer" containerID="fdcad2e90c8dcf59aada3333930294077886b20dc4ffa931ec9d1f20d86de19d"
	Oct 17 20:12:01 no-preload-449580 kubelet[711]: I1017 20:12:01.415885     711 scope.go:117] "RemoveContainer" containerID="caf2282f6c9babce176ab1e6dee770220985c0512257047ff3255003a1a892e4"
	Oct 17 20:12:01 no-preload-449580 kubelet[711]: E1017 20:12:01.416108     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gqppr_kubernetes-dashboard(45832212-ca15-41a2-a9e8-9fc966fee3c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gqppr" podUID="45832212-ca15-41a2-a9e8-9fc966fee3c2"
	Oct 17 20:12:10 no-preload-449580 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:12:10 no-preload-449580 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:12:10 no-preload-449580 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 20:12:10 no-preload-449580 systemd[1]: kubelet.service: Consumed 1.706s CPU time.
	
	
	==> kubernetes-dashboard [1995d053f3c779ae7a5d37d3f2392fc388fb7eaf8a318c4c16bc4e63cc6cd09b] <==
	2025/10/17 20:11:30 Starting overwatch
	2025/10/17 20:11:30 Using namespace: kubernetes-dashboard
	2025/10/17 20:11:30 Using in-cluster config to connect to apiserver
	2025/10/17 20:11:30 Using secret token for csrf signing
	2025/10/17 20:11:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:11:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:11:30 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 20:11:30 Generating JWE encryption key
	2025/10/17 20:11:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:11:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:11:30 Initializing JWE encryption key from synchronized object
	2025/10/17 20:11:30 Creating in-cluster Sidecar client
	2025/10/17 20:11:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:11:30 Serving insecurely on HTTP port: 9090
	2025/10/17 20:12:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [657fe0dbb4b0cba7157b7d8d6dd281cba239e2b86568e955ef7820a3d73b740f] <==
	I1017 20:11:51.586370       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:11:51.595885       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:11:51.595937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:11:51.598364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:11:55.052996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:11:59.313182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:02.911813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:05.966325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:08.988907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:08.994822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:12:08.994975       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:12:08.995056       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0252561b-3175-478a-ae66-c43f417b884b", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-449580_4bb0b45a-bae7-4485-afaf-0842c5c38fde became leader
	I1017 20:12:08.995157       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-449580_4bb0b45a-bae7-4485-afaf-0842c5c38fde!
	W1017 20:12:08.998028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:09.003433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:12:09.095651       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-449580_4bb0b45a-bae7-4485-afaf-0842c5c38fde!
	W1017 20:12:11.006863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:11.012034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:13.016950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:13.023519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:15.029101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:15.035076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fdcad2e90c8dcf59aada3333930294077886b20dc4ffa931ec9d1f20d86de19d] <==
	I1017 20:11:20.786938       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:11:50.789140       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
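A note on the log excerpt above: CoreDNS and the first storage-provisioner container both report `dial tcp 10.96.0.1:443: i/o timeout`, meaning pods could not reach the in-cluster apiserver Service VIP during the restart window, and systemd stopped kubelet.service at 20:12:10, right as the post-mortem ran. A minimal reachability probe for that VIP (a sketch only: the pod name `vip-probe` is hypothetical, and it assumes kubectl access to this profile's context plus a busybox wget built with TLS support) would be:

	kubectl --context no-preload-449580 run vip-probe --rm -i --restart=Never \
	  --image=busybox:1.36 -- \
	  wget -qO- --no-check-certificate https://10.96.0.1:443/version

A hang or timeout here reproduces the symptom; a JSON version response means the VIP path has recovered.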
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-449580 -n no-preload-449580
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-449580 -n no-preload-449580: exit status 2 (357.978026ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-449580 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.46s)
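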

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-051488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-051488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (265.069764ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:12:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
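The exit status 11 above comes from minikube's pause check: before enabling an addon it lists containers via runc to decide whether the cluster is paused ("check paused: list paused"), and on this CRI-O node the listing itself fails because the default runc state directory `/run/runc` does not exist. The failing step can be replayed verbatim over ssh (a diagnostic sketch, not a fix):

	# the exact command the error message reports as failing:
	minikube -p embed-certs-051488 ssh -- sudo runc list -f json
	# expected on this node: level=error msg="open /run/runc: no such file or directory"

The same `runc list` error pattern underlies the Pause and EnableAddonWhileActive failures reported for the other groups in this run.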
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-051488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-051488 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-051488 describe deploy/metrics-server -n kube-system: exit status 1 (61.997227ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-051488 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-051488
helpers_test.go:243: (dbg) docker inspect embed-certs-051488:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9",
	        "Created": "2025-10-17T20:11:58.181534777Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 377342,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:11:58.227796529Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9/hostname",
	        "HostsPath": "/var/lib/docker/containers/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9/hosts",
	        "LogPath": "/var/lib/docker/containers/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9-json.log",
	        "Name": "/embed-certs-051488",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-051488:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-051488",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9",
	                "LowerDir": "/var/lib/docker/overlay2/684b82987b68d7135a27ad8b5cf1b32e9c1320900d7e0bc08bfd98a435c63c89-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/684b82987b68d7135a27ad8b5cf1b32e9c1320900d7e0bc08bfd98a435c63c89/merged",
	                "UpperDir": "/var/lib/docker/overlay2/684b82987b68d7135a27ad8b5cf1b32e9c1320900d7e0bc08bfd98a435c63c89/diff",
	                "WorkDir": "/var/lib/docker/overlay2/684b82987b68d7135a27ad8b5cf1b32e9c1320900d7e0bc08bfd98a435c63c89/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-051488",
	                "Source": "/var/lib/docker/volumes/embed-certs-051488/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-051488",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-051488",
	                "name.minikube.sigs.k8s.io": "embed-certs-051488",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e31c4f7f20b5d7d3c8394320c180689a0c80d1ffb2e5bfc1eed113e6be5621a5",
	            "SandboxKey": "/var/run/docker/netns/e31c4f7f20b5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-051488": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:1d:73:10:62:49",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f65906aaca8cabced2699549a6acf35f9aee8c707d1ca3ba4422f5bcdf4982c0",
	                    "EndpointID": "71cc0d1ffbe50838e3a2d316d531bb84102c726a5f4cb5dc23cbcf687e83e060",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-051488",
	                        "8985127eaa32"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
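For reading the inspect output above: the container is running, and all node services are published on loopback only, with the Kubernetes apiserver port 8443/tcp mapped to 127.0.0.1:33192. Two quick host-side spot checks against those mappings (a sketch using only values shown in the JSON above):

	docker port embed-certs-051488 8443/tcp    # expect 127.0.0.1:33192
	curl -k https://127.0.0.1:33192/version    # TLS-skipping reachability check of the apiserver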
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-051488 -n embed-certs-051488
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-051488 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-051488 logs -n 25: (1.363689285s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-flag-599050                                                                                                                                                                                                                  │ force-systemd-flag-599050    │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-726816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p old-k8s-version-726816 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-726816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-449580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p no-preload-449580 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:11 UTC │
	│ addons  │ enable dashboard -p no-preload-449580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ image   │ old-k8s-version-726816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ pause   │ -p old-k8s-version-726816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p cert-expiration-202048 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-202048       │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ image   │ no-preload-449580 image list --format=json                                                                                                                                                                                                    │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ pause   │ -p no-preload-449580 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ delete  │ -p cert-expiration-202048                                                                                                                                                                                                                     │ cert-expiration-202048       │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p disable-driver-mounts-270495                                                                                                                                                                                                               │ disable-driver-mounts-270495 │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ delete  │ -p no-preload-449580                                                                                                                                                                                                                          │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p no-preload-449580                                                                                                                                                                                                                          │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-051488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:12:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:12:21.725677  385034 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:12:21.726029  385034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:12:21.726045  385034 out.go:374] Setting ErrFile to fd 2...
	I1017 20:12:21.726052  385034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:12:21.726377  385034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:12:21.727105  385034 out.go:368] Setting JSON to false
	I1017 20:12:21.728959  385034 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6890,"bootTime":1760725052,"procs":415,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:12:21.729105  385034 start.go:141] virtualization: kvm guest
	I1017 20:12:21.731854  385034 out.go:179] * [newest-cni-051083] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:12:21.733920  385034 notify.go:220] Checking for updates...
	I1017 20:12:21.733951  385034 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:12:21.735576  385034 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:12:21.738834  385034 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:12:21.740596  385034 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:12:21.742094  385034 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:12:21.743607  385034 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:12:21.749732  385034 config.go:182] Loaded profile config "default-k8s-diff-port-563805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:21.749914  385034 config.go:182] Loaded profile config "embed-certs-051488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:21.750050  385034 config.go:182] Loaded profile config "kubernetes-upgrade-660693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:21.750264  385034 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:12:21.786553  385034 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:12:21.786758  385034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:12:21.880731  385034 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-17 20:12:21.859518834 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:12:21.880978  385034 docker.go:318] overlay module found
	I1017 20:12:21.885601  385034 out.go:179] * Using the docker driver based on user configuration
	I1017 20:12:21.887545  385034 start.go:305] selected driver: docker
	I1017 20:12:21.887574  385034 start.go:925] validating driver "docker" against <nil>
	I1017 20:12:21.887595  385034 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:12:21.888459  385034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:12:21.960435  385034 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-17 20:12:21.948858112 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:12:21.960689  385034 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1017 20:12:21.960730  385034 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1017 20:12:21.961012  385034 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 20:12:21.965431  385034 out.go:179] * Using Docker driver with root privileges
	I1017 20:12:21.966974  385034 cni.go:84] Creating CNI manager for ""
	I1017 20:12:21.967045  385034 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:21.967053  385034 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:12:21.967149  385034 start.go:349] cluster config:
	{Name:newest-cni-051083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-051083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:21.968824  385034 out.go:179] * Starting "newest-cni-051083" primary control-plane node in "newest-cni-051083" cluster
	I1017 20:12:21.970240  385034 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:12:21.971682  385034 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:12:21.973978  385034 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:12:21.974038  385034 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 20:12:21.974060  385034 cache.go:58] Caching tarball of preloaded images
	I1017 20:12:21.974078  385034 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:12:21.974175  385034 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:12:21.974191  385034 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:12:21.974329  385034 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/config.json ...
	I1017 20:12:21.974358  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/config.json: {Name:mk32842e78c30269f7c8b87106cd69b1a95516bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:21.998214  385034 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:12:21.998242  385034 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:12:21.998265  385034 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:12:21.998298  385034 start.go:360] acquireMachinesLock for newest-cni-051083: {Name:mk40bc92590455b2d7e0a97cfb06b266ec3e9a76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:12:21.998627  385034 start.go:364] duration metric: took 304.911µs to acquireMachinesLock for "newest-cni-051083"
	I1017 20:12:21.998661  385034 start.go:93] Provisioning new machine with config: &{Name:newest-cni-051083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-051083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:12:21.998763  385034 start.go:125] createHost starting for "" (driver="docker")
	I1017 20:12:20.546875  376518 addons.go:514] duration metric: took 1.284205009s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1017 20:12:22.548093  376518 node_ready.go:57] node "embed-certs-051488" has "Ready":"False" status (will retry)
	I1017 20:12:20.602124  383050 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-563805 --name default-k8s-diff-port-563805 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-563805 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-563805 --network default-k8s-diff-port-563805 --ip 192.168.85.2 --volume default-k8s-diff-port-563805:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:12:21.500362  383050 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-563805 --format={{.State.Running}}
	I1017 20:12:21.521963  383050 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-563805 --format={{.State.Status}}
	I1017 20:12:21.544229  383050 cli_runner.go:164] Run: docker exec default-k8s-diff-port-563805 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:12:21.594016  383050 oci.go:144] the created container "default-k8s-diff-port-563805" has a running status.
	I1017 20:12:21.594063  383050 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa...
	I1017 20:12:22.132896  383050 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:12:22.162335  383050 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-563805 --format={{.State.Status}}
	I1017 20:12:22.183501  383050 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:12:22.183522  383050 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-563805 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:12:22.236663  383050 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-563805 --format={{.State.Status}}
	I1017 20:12:22.256624  383050 machine.go:93] provisionDockerMachine start ...
	I1017 20:12:22.256733  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:22.279600  383050 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:22.279920  383050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 20:12:22.279941  383050 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:12:22.420288  383050 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-563805
	
	I1017 20:12:22.420324  383050 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-563805"
	I1017 20:12:22.420418  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:22.441516  383050 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:22.441734  383050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 20:12:22.441771  383050 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-563805 && echo "default-k8s-diff-port-563805" | sudo tee /etc/hostname
	I1017 20:12:22.596611  383050 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-563805
	
	I1017 20:12:22.596704  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:22.618632  383050 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:22.618929  383050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 20:12:22.618960  383050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-563805' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-563805/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-563805' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:12:22.759647  383050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:12:22.759675  383050 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:12:22.759718  383050 ubuntu.go:190] setting up certificates
	I1017 20:12:22.759730  383050 provision.go:84] configureAuth start
	I1017 20:12:22.759808  383050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-563805
	I1017 20:12:22.780461  383050 provision.go:143] copyHostCerts
	I1017 20:12:22.780526  383050 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:12:22.780538  383050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:12:22.780613  383050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:12:22.780734  383050 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:12:22.780762  383050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:12:22.780806  383050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:12:22.780873  383050 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:12:22.780882  383050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:12:22.780905  383050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:12:22.780959  383050 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-563805 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-563805 localhost minikube]
	I1017 20:12:23.308421  383050 provision.go:177] copyRemoteCerts
	I1017 20:12:23.308481  383050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:12:23.308519  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:23.329340  383050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:12:23.430009  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:12:23.454240  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1017 20:12:23.474625  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:12:23.494563  383050 provision.go:87] duration metric: took 734.813132ms to configureAuth
	I1017 20:12:23.494598  383050 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:12:23.494810  383050 config.go:182] Loaded profile config "default-k8s-diff-port-563805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:23.494933  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:23.514501  383050 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:23.514721  383050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 20:12:23.514754  383050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:12:23.784789  383050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:12:23.784822  383050 machine.go:96] duration metric: took 1.528170485s to provisionDockerMachine
	I1017 20:12:23.784836  383050 client.go:171] duration metric: took 8.025728223s to LocalClient.Create
	I1017 20:12:23.784861  383050 start.go:167] duration metric: took 8.025806742s to libmachine.API.Create "default-k8s-diff-port-563805"
	I1017 20:12:23.784871  383050 start.go:293] postStartSetup for "default-k8s-diff-port-563805" (driver="docker")
	I1017 20:12:23.784886  383050 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:12:23.784975  383050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:12:23.785027  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:23.805143  383050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:12:23.906510  383050 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:12:23.910673  383050 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:12:23.910705  383050 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:12:23.910718  383050 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:12:23.910809  383050 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:12:23.910919  383050 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:12:23.911060  383050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:12:23.920453  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:23.943290  383050 start.go:296] duration metric: took 158.401148ms for postStartSetup
	I1017 20:12:23.943789  383050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-563805
	I1017 20:12:23.963958  383050 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/config.json ...
	I1017 20:12:23.964304  383050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:12:23.964365  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:23.983513  383050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:12:24.078260  383050 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:12:24.083595  383050 start.go:128] duration metric: took 8.327293608s to createHost
	I1017 20:12:24.083623  383050 start.go:83] releasing machines lock for "default-k8s-diff-port-563805", held for 8.327481429s
	I1017 20:12:24.083703  383050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-563805
	I1017 20:12:24.102809  383050 ssh_runner.go:195] Run: cat /version.json
	I1017 20:12:24.102874  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:24.102809  383050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:12:24.102986  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:24.123425  383050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:12:24.126521  383050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:12:24.273375  383050 ssh_runner.go:195] Run: systemctl --version
	I1017 20:12:24.280419  383050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:12:24.320296  383050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:12:24.325274  383050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:12:24.325353  383050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:12:24.356730  383050 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 20:12:24.356795  383050 start.go:495] detecting cgroup driver to use...
	I1017 20:12:24.356834  383050 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:12:24.356880  383050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:12:24.376110  383050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:12:24.390456  383050 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:12:24.390528  383050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:12:24.408764  383050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:12:24.427265  383050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:12:24.515281  383050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:12:24.613761  383050 docker.go:234] disabling docker service ...
	I1017 20:12:24.613822  383050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:12:24.634044  383050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:12:24.647845  383050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:12:24.740254  383050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:12:24.827137  383050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:12:24.840030  383050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:12:24.855464  383050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:12:24.855529  383050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:24.871114  383050 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:12:24.871196  383050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:24.881929  383050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:24.892248  383050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:24.914674  383050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:12:24.924769  383050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:24.936804  383050 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:24.969687  383050 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:25.031085  383050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:12:25.039496  383050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:12:25.049051  383050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:25.138635  383050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:12:22.001657  385034 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:12:22.001925  385034 start.go:159] libmachine.API.Create for "newest-cni-051083" (driver="docker")
	I1017 20:12:22.001987  385034 client.go:168] LocalClient.Create starting
	I1017 20:12:22.002072  385034 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem
	I1017 20:12:22.002109  385034 main.go:141] libmachine: Decoding PEM data...
	I1017 20:12:22.002132  385034 main.go:141] libmachine: Parsing certificate...
	I1017 20:12:22.002196  385034 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem
	I1017 20:12:22.002220  385034 main.go:141] libmachine: Decoding PEM data...
	I1017 20:12:22.002235  385034 main.go:141] libmachine: Parsing certificate...
	I1017 20:12:22.002616  385034 cli_runner.go:164] Run: docker network inspect newest-cni-051083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:12:22.022402  385034 cli_runner.go:211] docker network inspect newest-cni-051083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:12:22.022472  385034 network_create.go:284] running [docker network inspect newest-cni-051083] to gather additional debugging logs...
	I1017 20:12:22.022492  385034 cli_runner.go:164] Run: docker network inspect newest-cni-051083
	W1017 20:12:22.041458  385034 cli_runner.go:211] docker network inspect newest-cni-051083 returned with exit code 1
	I1017 20:12:22.041497  385034 network_create.go:287] error running [docker network inspect newest-cni-051083]: docker network inspect newest-cni-051083: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-051083 not found
	I1017 20:12:22.041513  385034 network_create.go:289] output of [docker network inspect newest-cni-051083]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-051083 not found
	
	** /stderr **
	I1017 20:12:22.041603  385034 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:12:22.062104  385034 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d34a70da1174 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:b8:c9:c3:2e:b0} reservation:<nil>}
	I1017 20:12:22.062669  385034 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-07edace58173 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f3:28:2c:52:ce} reservation:<nil>}
	I1017 20:12:22.063211  385034 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a478249e8fe7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:51:65:8d:cb:60} reservation:<nil>}
	I1017 20:12:22.063791  385034 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7ed8ef1bc0a4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:6a:98:d7:e8:28} reservation:<nil>}
	I1017 20:12:22.064153  385034 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9a4aaba57340 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:16:30:99:20:8d:be} reservation:<nil>}
	I1017 20:12:22.064852  385034 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f65906aaca8c IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ba:86:9c:15:01:28} reservation:<nil>}
	I1017 20:12:22.065604  385034 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fbf0b0}
	I1017 20:12:22.065626  385034 network_create.go:124] attempt to create docker network newest-cni-051083 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1017 20:12:22.065690  385034 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-051083 newest-cni-051083
	I1017 20:12:22.128861  385034 network_create.go:108] docker network newest-cni-051083 192.168.103.0/24 created
	I1017 20:12:22.128902  385034 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-051083" container
	I1017 20:12:22.128977  385034 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:12:22.149037  385034 cli_runner.go:164] Run: docker volume create newest-cni-051083 --label name.minikube.sigs.k8s.io=newest-cni-051083 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:12:22.170567  385034 oci.go:103] Successfully created a docker volume newest-cni-051083
	I1017 20:12:22.170652  385034 cli_runner.go:164] Run: docker run --rm --name newest-cni-051083-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-051083 --entrypoint /usr/bin/test -v newest-cni-051083:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:12:22.613697  385034 oci.go:107] Successfully prepared a docker volume newest-cni-051083
	I1017 20:12:22.613779  385034 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:12:22.613821  385034 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:12:22.613900  385034 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-051083:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 20:12:27.263209  383050 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.124516507s)
	I1017 20:12:27.263248  383050 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:12:27.263304  383050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:12:27.268678  383050 start.go:563] Will wait 60s for crictl version
	I1017 20:12:27.268766  383050 ssh_runner.go:195] Run: which crictl
	I1017 20:12:27.273028  383050 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:12:27.302815  383050 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:12:27.302907  383050 ssh_runner.go:195] Run: crio --version
	I1017 20:12:27.336248  383050 ssh_runner.go:195] Run: crio --version
	I1017 20:12:27.368686  383050 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:12:25.188593  344862 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.066262578s)
	W1017 20:12:25.188711  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1017 20:12:25.188759  344862 logs.go:123] Gathering logs for kube-apiserver [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5] ...
	I1017 20:12:25.188789  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:25.223473  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:12:25.223509  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:12:25.255924  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:25.255958  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:25.311394  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:12:25.311435  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:25.340102  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:12:25.340136  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:25.368999  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:25.369030  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:25.402905  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:25.402940  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:25.423117  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:25.423159  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1017 20:12:24.548718  376518 node_ready.go:57] node "embed-certs-051488" has "Ready":"False" status (will retry)
	W1017 20:12:27.047437  376518 node_ready.go:57] node "embed-certs-051488" has "Ready":"False" status (will retry)
	I1017 20:12:27.370192  383050 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-563805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:12:27.394330  383050 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 20:12:27.399364  383050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:12:27.420924  383050 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-563805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-563805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:12:27.421032  383050 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:12:27.421073  383050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:12:27.459232  383050 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:12:27.459256  383050 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:12:27.459303  383050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:12:27.489791  383050 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:12:27.489820  383050 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:12:27.489831  383050 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1017 20:12:27.489935  383050 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-563805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-563805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:12:27.490021  383050 ssh_runner.go:195] Run: crio config
	I1017 20:12:27.542209  383050 cni.go:84] Creating CNI manager for ""
	I1017 20:12:27.542243  383050 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:27.542263  383050 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:12:27.542300  383050 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-563805 NodeName:default-k8s-diff-port-563805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:12:27.542478  383050 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-563805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:12:27.542552  383050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:12:27.553709  383050 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:12:27.553787  383050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:12:27.563884  383050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1017 20:12:27.581408  383050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:12:27.600811  383050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
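
The 2224-byte kubeadm.yaml.new just copied up is the config printed a few lines earlier, filled in from the kubeadm.go:190 options. A hedged illustration of that rendering with Go's text/template, reproducing only the InitConfiguration stanza; the template text mirrors the log output, but the variable names here are invented for the sketch, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// Only the first stanza of the config above; values are filled in below.
	const initCfg = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.AdvertiseAddress}}\n" +
		"  bindPort: {{.APIServerPort}}\n" +
		"nodeRegistration:\n" +
		"  criSocket: unix:///var/run/crio/crio.sock\n" +
		"  name: \"{{.NodeName}}\"\n"

	func main() {
		t := template.Must(template.New("init").Parse(initCfg))
		data := map[string]any{ // values from the log above
			"AdvertiseAddress": "192.168.85.2",
			"APIServerPort":    8444,
			"NodeName":         "default-k8s-diff-port-563805",
		}
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}
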
	I1017 20:12:27.618137  383050 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:12:27.623043  383050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:12:27.636326  383050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:27.730914  383050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:12:27.757823  383050 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805 for IP: 192.168.85.2
	I1017 20:12:27.757849  383050 certs.go:195] generating shared ca certs ...
	I1017 20:12:27.757870  383050 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:27.758055  383050 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:12:27.758128  383050 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:12:27.758143  383050 certs.go:257] generating profile certs ...
	I1017 20:12:27.758218  383050 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/client.key
	I1017 20:12:27.758247  383050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/client.crt with IP's: []
	I1017 20:12:28.127258  383050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/client.crt ...
	I1017 20:12:28.127291  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/client.crt: {Name:mkdb4908d85bb0fbf42b54fea70a53f69c796a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:28.127460  383050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/client.key ...
	I1017 20:12:28.127474  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/client.key: {Name:mkf77a964dd11655a181747805acc9c537a9aba5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:28.127551  383050 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.key.62088183
	I1017 20:12:28.127568  383050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.crt.62088183 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1017 20:12:28.210803  383050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.crt.62088183 ...
	I1017 20:12:28.210839  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.crt.62088183: {Name:mkeb4acf67adeb3a65d8f73c6ddca86fe7b0357f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:28.211024  383050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.key.62088183 ...
	I1017 20:12:28.211049  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.key.62088183: {Name:mk893006e82030ff0ae3f0128f0ea78a25344473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:28.211164  383050 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.crt.62088183 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.crt
	I1017 20:12:28.211293  383050 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.key.62088183 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.key
	I1017 20:12:28.211394  383050 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.key
	I1017 20:12:28.211429  383050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.crt with IP's: []
	I1017 20:12:28.272145  383050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.crt ...
	I1017 20:12:28.272175  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.crt: {Name:mkb68cb9add86d0869ff386211795c62543b4306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:28.272362  383050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.key ...
	I1017 20:12:28.272380  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.key: {Name:mk918e10ecf39d170a09ddecccfa035d26ac76ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
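
crypto.go is minting the profile's certificates here: a client cert for minikube-user, an apiserver serving cert with the SAN list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], and an aggregator proxy-client cert. A compact standard-library approximation of one such certificate (self-signed for brevity, whereas minikube signs against minikubeCA; the SANs and expiry are the ones in the log):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{ // SAN list logged by crypto.go:68
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
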
	I1017 20:12:28.272605  383050 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:12:28.272655  383050 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:12:28.272672  383050 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:12:28.272708  383050 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:12:28.272750  383050 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:12:28.272782  383050 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:12:28.272834  383050 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:28.273588  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:12:28.294125  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:12:28.315974  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:12:28.337732  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:12:28.359201  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:12:28.380227  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:12:28.404893  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:12:28.427352  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:12:28.447015  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:12:28.468149  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:12:28.488289  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:12:28.508105  383050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:12:28.521912  383050 ssh_runner.go:195] Run: openssl version
	I1017 20:12:28.528546  383050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:12:28.538162  383050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:28.542620  383050 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:28.542677  383050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:28.580591  383050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:12:28.590293  383050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:12:28.599953  383050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:12:28.604290  383050 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:12:28.604343  383050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:12:28.641557  383050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
	I1017 20:12:28.652033  383050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:12:28.662460  383050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:12:28.666958  383050 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:12:28.667022  383050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:12:28.704835  383050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
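
Each openssl x509 -hash -noout / ln -fs pair above publishes a CA into OpenSSL's trust directory: OpenSSL resolves CAs in /etc/ssl/certs by subject-hash filenames, which is why minikubeCA.pem gets the sibling link b5213941.0. A sketch of one hash-and-symlink round (needs root and the openssl binary; paths are the ones in the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // ln -fs semantics: replace any existing link
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}
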
	I1017 20:12:28.714396  383050 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:12:28.718390  383050 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:12:28.718444  383050 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-563805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-563805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:28.718518  383050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:12:28.718581  383050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:12:28.749291  383050 cri.go:89] found id: ""
	I1017 20:12:28.749369  383050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:12:28.758528  383050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:12:28.766885  383050 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:12:28.766958  383050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:12:28.776822  383050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:12:28.776843  383050 kubeadm.go:157] found existing configuration files:
	
	I1017 20:12:28.776888  383050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1017 20:12:28.786121  383050 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:12:28.786191  383050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:12:28.794360  383050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1017 20:12:28.803982  383050 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:12:28.804043  383050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:12:28.812651  383050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1017 20:12:28.821854  383050 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:12:28.821919  383050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:12:28.830552  383050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1017 20:12:28.838790  383050 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:12:28.838858  383050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:12:28.847029  383050 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:12:28.913309  383050 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 20:12:28.982349  383050 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 20:12:27.169311  385034 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-051083:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.555366003s)
	I1017 20:12:27.169351  385034 kic.go:203] duration metric: took 4.555527609s to extract preloaded images to volume ...
	W1017 20:12:27.169455  385034 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 20:12:27.169496  385034 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 20:12:27.169533  385034 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:12:27.236703  385034 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-051083 --name newest-cni-051083 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-051083 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-051083 --network newest-cni-051083 --ip 192.168.103.2 --volume newest-cni-051083:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:12:27.553787  385034 cli_runner.go:164] Run: docker container inspect newest-cni-051083 --format={{.State.Running}}
	I1017 20:12:27.574846  385034 cli_runner.go:164] Run: docker container inspect newest-cni-051083 --format={{.State.Status}}
	I1017 20:12:27.595939  385034 cli_runner.go:164] Run: docker exec newest-cni-051083 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:12:27.649915  385034 oci.go:144] the created container "newest-cni-051083" has a running status.
	I1017 20:12:27.649947  385034 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa...
	I1017 20:12:28.284930  385034 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:12:28.313914  385034 cli_runner.go:164] Run: docker container inspect newest-cni-051083 --format={{.State.Status}}
	I1017 20:12:28.333060  385034 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:12:28.333080  385034 kic_runner.go:114] Args: [docker exec --privileged newest-cni-051083 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:12:28.385255  385034 cli_runner.go:164] Run: docker container inspect newest-cni-051083 --format={{.State.Status}}
	I1017 20:12:28.407256  385034 machine.go:93] provisionDockerMachine start ...
	I1017 20:12:28.407376  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:28.427479  385034 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:28.427824  385034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 20:12:28.427848  385034 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:12:28.563106  385034 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-051083
	
	I1017 20:12:28.563132  385034 ubuntu.go:182] provisioning hostname "newest-cni-051083"
	I1017 20:12:28.563202  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:28.582586  385034 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:28.582855  385034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 20:12:28.582871  385034 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-051083 && echo "newest-cni-051083" | sudo tee /etc/hostname
	I1017 20:12:28.732754  385034 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-051083
	
	I1017 20:12:28.732847  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:28.753126  385034 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:28.753395  385034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 20:12:28.753427  385034 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-051083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-051083/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-051083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:12:28.893011  385034 main.go:141] libmachine: SSH cmd err, output: <nil>: 
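
The three SSH commands above (hostname, the hostname rewrite via tee, and the 127.0.1.1 /etc/hosts guard) run over the "native" SSH client against the container's published port 127.0.0.1:33199 as user docker, authenticated with the machine's id_rsa. A sketch of one such round-trip with golang.org/x/crypto/ssh; the key path is shortened here, and host-key verification is skipped just as it is for this throwaway test container:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/newest-cni-051083/id_rsa"))
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33199", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname") // first command the provisioner runs
		if err != nil {
			panic(err)
		}
		fmt.Printf("hostname: %s", out)
	}
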
	I1017 20:12:28.893128  385034 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:12:28.893178  385034 ubuntu.go:190] setting up certificates
	I1017 20:12:28.893194  385034 provision.go:84] configureAuth start
	I1017 20:12:28.893265  385034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-051083
	I1017 20:12:28.912838  385034 provision.go:143] copyHostCerts
	I1017 20:12:28.912909  385034 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:12:28.912925  385034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:12:28.913000  385034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:12:28.913116  385034 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:12:28.913129  385034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:12:28.913166  385034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:12:28.913237  385034 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:12:28.913246  385034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:12:28.913279  385034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:12:28.913392  385034 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.newest-cni-051083 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-051083]
	I1017 20:12:29.209061  385034 provision.go:177] copyRemoteCerts
	I1017 20:12:29.209121  385034 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:12:29.209158  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:29.228992  385034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:12:29.327595  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:12:29.348800  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:12:29.368796  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:12:29.389209  385034 provision.go:87] duration metric: took 495.995104ms to configureAuth
	I1017 20:12:29.389238  385034 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:12:29.389448  385034 config.go:182] Loaded profile config "newest-cni-051083": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:29.389590  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:29.412619  385034 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:29.412928  385034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 20:12:29.412951  385034 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:12:29.666672  385034 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:12:29.666699  385034 machine.go:96] duration metric: took 1.259413271s to provisionDockerMachine
	I1017 20:12:29.666708  385034 client.go:171] duration metric: took 7.664711687s to LocalClient.Create
	I1017 20:12:29.666726  385034 start.go:167] duration metric: took 7.664803946s to libmachine.API.Create "newest-cni-051083"
	I1017 20:12:29.666733  385034 start.go:293] postStartSetup for "newest-cni-051083" (driver="docker")
	I1017 20:12:29.666758  385034 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:12:29.666821  385034 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:12:29.666862  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:29.686112  385034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:12:29.787827  385034 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:12:29.791786  385034 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:12:29.791813  385034 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:12:29.791825  385034 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:12:29.791887  385034 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:12:29.792048  385034 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:12:29.792174  385034 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:12:29.800658  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:29.822552  385034 start.go:296] duration metric: took 155.802523ms for postStartSetup
	I1017 20:12:29.822998  385034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-051083
	I1017 20:12:29.842065  385034 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/config.json ...
	I1017 20:12:29.842426  385034 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:12:29.842473  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:29.861604  385034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:12:29.956344  385034 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:12:29.961292  385034 start.go:128] duration metric: took 7.962509523s to createHost
	I1017 20:12:29.961321  385034 start.go:83] releasing machines lock for "newest-cni-051083", held for 7.962672012s
	I1017 20:12:29.961394  385034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-051083
	I1017 20:12:29.979784  385034 ssh_runner.go:195] Run: cat /version.json
	I1017 20:12:29.979845  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:29.979790  385034 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:12:29.979970  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:30.000190  385034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:12:30.000523  385034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:12:30.154335  385034 ssh_runner.go:195] Run: systemctl --version
	I1017 20:12:30.161686  385034 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:12:30.199115  385034 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:12:30.204135  385034 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:12:30.204208  385034 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:12:30.234013  385034 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
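
cni.go has just renamed every bridge/podman config under /etc/cni/net.d to *.mk_disabled, so the kindnet config installed later is the only network CRI-O will load. A Go sketch of that rename pass (same directory, patterns, and suffix as the find -exec mv in the log; needs root on a real node):

	package main

	import (
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const dir = "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			panic(err)
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue // maxdepth 1, regular files only, skip already-disabled configs
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					panic(err)
				}
			}
		}
	}
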
	I1017 20:12:30.234043  385034 start.go:495] detecting cgroup driver to use...
	I1017 20:12:30.234083  385034 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:12:30.234136  385034 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:12:30.251847  385034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:12:30.266325  385034 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:12:30.266383  385034 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:12:30.287093  385034 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:12:30.306571  385034 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:12:30.393115  385034 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:12:30.483007  385034 docker.go:234] disabling docker service ...
	I1017 20:12:30.483095  385034 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:12:30.503350  385034 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:12:30.516907  385034 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:12:30.604508  385034 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:12:30.688801  385034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:12:30.703249  385034 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:12:30.719088  385034 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:12:30.719153  385034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.730628  385034 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:12:30.730700  385034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.741040  385034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.751209  385034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.761361  385034 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:12:30.770688  385034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.781029  385034 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.795877  385034 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.806752  385034 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:12:30.814868  385034 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
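
crio.go reconfigures the runtime by rewriting /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin the pause image, force the systemd cgroup manager, reset conmon_cgroup, and open unprivileged ports through default_sysctls, then flip ip_forward on. A sketch of the two central rewrites done with Go regexps instead of sed (same file and values as the log; the remaining edits follow the same substitute-in-place pattern):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
		// A systemctl daemon-reload and systemctl restart crio follow, as in the log.
	}
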
	I1017 20:12:30.822981  385034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:30.903136  385034 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:12:31.021412  385034 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:12:31.021482  385034 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:12:31.025846  385034 start.go:563] Will wait 60s for crictl version
	I1017 20:12:31.025913  385034 ssh_runner.go:195] Run: which crictl
	I1017 20:12:31.030144  385034 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:12:31.058441  385034 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:12:31.058540  385034 ssh_runner.go:195] Run: crio --version
	I1017 20:12:31.088254  385034 ssh_runner.go:195] Run: crio --version
	I1017 20:12:31.122238  385034 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:12:31.123563  385034 cli_runner.go:164] Run: docker network inspect newest-cni-051083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:12:31.141767  385034 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1017 20:12:31.146295  385034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:12:31.159616  385034 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1017 20:12:31.161074  385034 kubeadm.go:883] updating cluster {Name:newest-cni-051083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-051083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:12:31.161232  385034 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:12:31.161336  385034 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:12:31.197112  385034 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:12:31.197140  385034 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:12:31.197204  385034 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:12:31.227269  385034 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:12:31.227295  385034 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:12:31.227302  385034 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1017 20:12:31.227388  385034 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-051083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-051083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:12:31.227465  385034 ssh_runner.go:195] Run: crio config
	I1017 20:12:31.273251  385034 cni.go:84] Creating CNI manager for ""
	I1017 20:12:31.273274  385034 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:31.273298  385034 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1017 20:12:31.273336  385034 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-051083 NodeName:newest-cni-051083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:12:31.273489  385034 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-051083"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:12:31.273567  385034 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:12:31.282502  385034 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:12:31.282575  385034 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:12:31.290669  385034 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 20:12:31.304514  385034 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:12:31.321025  385034 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1017 20:12:31.334506  385034 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:12:31.338622  385034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:12:31.350175  385034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:31.442920  385034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:12:31.466642  385034 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083 for IP: 192.168.103.2
	I1017 20:12:31.466674  385034 certs.go:195] generating shared ca certs ...
	I1017 20:12:31.466691  385034 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:31.466860  385034 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:12:31.466899  385034 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:12:31.466907  385034 certs.go:257] generating profile certs ...
	I1017 20:12:31.466978  385034 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/client.key
	I1017 20:12:31.467004  385034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/client.crt with IP's: []
	I1017 20:12:27.975308  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1017 20:12:29.048035  376518 node_ready.go:57] node "embed-certs-051488" has "Ready":"False" status (will retry)
	W1017 20:12:31.547841  376518 node_ready.go:57] node "embed-certs-051488" has "Ready":"False" status (will retry)
	I1017 20:12:32.047477  376518 node_ready.go:49] node "embed-certs-051488" is "Ready"
	I1017 20:12:32.047508  376518 node_ready.go:38] duration metric: took 11.503201874s for node "embed-certs-051488" to be "Ready" ...
	I1017 20:12:32.047523  376518 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:12:32.047580  376518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:12:32.062138  376518 api_server.go:72] duration metric: took 12.799508344s to wait for apiserver process to appear ...
	I1017 20:12:32.062165  376518 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:12:32.062196  376518 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1017 20:12:32.067509  376518 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1017 20:12:32.068553  376518 api_server.go:141] control plane version: v1.34.1
	I1017 20:12:32.068579  376518 api_server.go:131] duration metric: took 6.405116ms to wait for apiserver health ...
	I1017 20:12:32.068589  376518 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:12:32.072828  376518 system_pods.go:59] 8 kube-system pods found
	I1017 20:12:32.072911  376518 system_pods.go:61] "coredns-66bc5c9577-gq5dd" [4c8aa324-2af3-4de9-9e87-e0d7c2049d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:12:32.072921  376518 system_pods.go:61] "etcd-embed-certs-051488" [eaf5eefb-016e-480b-95f0-987e5398e403] Running
	I1017 20:12:32.072929  376518 system_pods.go:61] "kindnet-rzd8h" [2175403d-fd55-45fd-8a79-62390167379e] Running
	I1017 20:12:32.072934  376518 system_pods.go:61] "kube-apiserver-embed-certs-051488" [81cb7a64-9a96-49cb-87d6-0ca6b2a06ff4] Running
	I1017 20:12:32.072940  376518 system_pods.go:61] "kube-controller-manager-embed-certs-051488" [c7f3bc8e-83ba-4289-a6bf-f9e34608a227] Running
	I1017 20:12:32.072945  376518 system_pods.go:61] "kube-proxy-95wmw" [51a10ca2-e69d-428e-9703-fdaa7b794cda] Running
	I1017 20:12:32.072961  376518 system_pods.go:61] "kube-scheduler-embed-certs-051488" [4d83e37f-3923-41dc-9dd3-c24adfbddf62] Running
	I1017 20:12:32.072969  376518 system_pods.go:61] "storage-provisioner" [4b66cc71-3175-46bd-93d2-28303821da56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:12:32.072983  376518 system_pods.go:74] duration metric: took 4.381178ms to wait for pod list to return data ...
	I1017 20:12:32.072994  376518 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:12:32.075977  376518 default_sa.go:45] found service account: "default"
	I1017 20:12:32.076045  376518 default_sa.go:55] duration metric: took 3.038574ms for default service account to be created ...
	I1017 20:12:32.076071  376518 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:12:32.078929  376518 system_pods.go:86] 8 kube-system pods found
	I1017 20:12:32.078963  376518 system_pods.go:89] "coredns-66bc5c9577-gq5dd" [4c8aa324-2af3-4de9-9e87-e0d7c2049d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:12:32.078972  376518 system_pods.go:89] "etcd-embed-certs-051488" [eaf5eefb-016e-480b-95f0-987e5398e403] Running
	I1017 20:12:32.078979  376518 system_pods.go:89] "kindnet-rzd8h" [2175403d-fd55-45fd-8a79-62390167379e] Running
	I1017 20:12:32.078985  376518 system_pods.go:89] "kube-apiserver-embed-certs-051488" [81cb7a64-9a96-49cb-87d6-0ca6b2a06ff4] Running
	I1017 20:12:32.078995  376518 system_pods.go:89] "kube-controller-manager-embed-certs-051488" [c7f3bc8e-83ba-4289-a6bf-f9e34608a227] Running
	I1017 20:12:32.079000  376518 system_pods.go:89] "kube-proxy-95wmw" [51a10ca2-e69d-428e-9703-fdaa7b794cda] Running
	I1017 20:12:32.079006  376518 system_pods.go:89] "kube-scheduler-embed-certs-051488" [4d83e37f-3923-41dc-9dd3-c24adfbddf62] Running
	I1017 20:12:32.079014  376518 system_pods.go:89] "storage-provisioner" [4b66cc71-3175-46bd-93d2-28303821da56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:12:32.079041  376518 retry.go:31] will retry after 280.81201ms: missing components: kube-dns
	I1017 20:12:32.364977  376518 system_pods.go:86] 8 kube-system pods found
	I1017 20:12:32.365022  376518 system_pods.go:89] "coredns-66bc5c9577-gq5dd" [4c8aa324-2af3-4de9-9e87-e0d7c2049d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:12:32.365031  376518 system_pods.go:89] "etcd-embed-certs-051488" [eaf5eefb-016e-480b-95f0-987e5398e403] Running
	I1017 20:12:32.365038  376518 system_pods.go:89] "kindnet-rzd8h" [2175403d-fd55-45fd-8a79-62390167379e] Running
	I1017 20:12:32.365044  376518 system_pods.go:89] "kube-apiserver-embed-certs-051488" [81cb7a64-9a96-49cb-87d6-0ca6b2a06ff4] Running
	I1017 20:12:32.365055  376518 system_pods.go:89] "kube-controller-manager-embed-certs-051488" [c7f3bc8e-83ba-4289-a6bf-f9e34608a227] Running
	I1017 20:12:32.365060  376518 system_pods.go:89] "kube-proxy-95wmw" [51a10ca2-e69d-428e-9703-fdaa7b794cda] Running
	I1017 20:12:32.365068  376518 system_pods.go:89] "kube-scheduler-embed-certs-051488" [4d83e37f-3923-41dc-9dd3-c24adfbddf62] Running
	I1017 20:12:32.365078  376518 system_pods.go:89] "storage-provisioner" [4b66cc71-3175-46bd-93d2-28303821da56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:12:32.365096  376518 retry.go:31] will retry after 251.191698ms: missing components: kube-dns
	I1017 20:12:32.619941  376518 system_pods.go:86] 8 kube-system pods found
	I1017 20:12:32.619975  376518 system_pods.go:89] "coredns-66bc5c9577-gq5dd" [4c8aa324-2af3-4de9-9e87-e0d7c2049d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:12:32.619982  376518 system_pods.go:89] "etcd-embed-certs-051488" [eaf5eefb-016e-480b-95f0-987e5398e403] Running
	I1017 20:12:32.619988  376518 system_pods.go:89] "kindnet-rzd8h" [2175403d-fd55-45fd-8a79-62390167379e] Running
	I1017 20:12:32.619992  376518 system_pods.go:89] "kube-apiserver-embed-certs-051488" [81cb7a64-9a96-49cb-87d6-0ca6b2a06ff4] Running
	I1017 20:12:32.619997  376518 system_pods.go:89] "kube-controller-manager-embed-certs-051488" [c7f3bc8e-83ba-4289-a6bf-f9e34608a227] Running
	I1017 20:12:32.620002  376518 system_pods.go:89] "kube-proxy-95wmw" [51a10ca2-e69d-428e-9703-fdaa7b794cda] Running
	I1017 20:12:32.620006  376518 system_pods.go:89] "kube-scheduler-embed-certs-051488" [4d83e37f-3923-41dc-9dd3-c24adfbddf62] Running
	I1017 20:12:32.620013  376518 system_pods.go:89] "storage-provisioner" [4b66cc71-3175-46bd-93d2-28303821da56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:12:32.620038  376518 retry.go:31] will retry after 460.199391ms: missing components: kube-dns
	I1017 20:12:33.085972  376518 system_pods.go:86] 8 kube-system pods found
	I1017 20:12:33.086010  376518 system_pods.go:89] "coredns-66bc5c9577-gq5dd" [4c8aa324-2af3-4de9-9e87-e0d7c2049d50] Running
	I1017 20:12:33.086018  376518 system_pods.go:89] "etcd-embed-certs-051488" [eaf5eefb-016e-480b-95f0-987e5398e403] Running
	I1017 20:12:33.086024  376518 system_pods.go:89] "kindnet-rzd8h" [2175403d-fd55-45fd-8a79-62390167379e] Running
	I1017 20:12:33.086038  376518 system_pods.go:89] "kube-apiserver-embed-certs-051488" [81cb7a64-9a96-49cb-87d6-0ca6b2a06ff4] Running
	I1017 20:12:33.086045  376518 system_pods.go:89] "kube-controller-manager-embed-certs-051488" [c7f3bc8e-83ba-4289-a6bf-f9e34608a227] Running
	I1017 20:12:33.086051  376518 system_pods.go:89] "kube-proxy-95wmw" [51a10ca2-e69d-428e-9703-fdaa7b794cda] Running
	I1017 20:12:33.086059  376518 system_pods.go:89] "kube-scheduler-embed-certs-051488" [4d83e37f-3923-41dc-9dd3-c24adfbddf62] Running
	I1017 20:12:33.086064  376518 system_pods.go:89] "storage-provisioner" [4b66cc71-3175-46bd-93d2-28303821da56] Running
	I1017 20:12:33.086076  376518 system_pods.go:126] duration metric: took 1.009996656s to wait for k8s-apps to be running ...
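The retry.go lines above are a polling loop with randomized, growing waits: each pass lists the kube-system pods and retries (280ms, 251ms, then 460ms here) until no component is missing or the deadline expires. A minimal sketch of that pattern, assuming a caller-supplied check function (not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil polls check with randomized backoff until it succeeds
    // or the timeout elapses, returning the last error on failure.
    func retryUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        wait := 250 * time.Millisecond
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err
            }
            // randomized wait, roughly like the 280ms/251ms/460ms above
            d := wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
            wait *= 2
        }
    }

    func main() {
        missing := 2 // simulate kube-dns becoming ready on the third poll
        _ = retryUntil(10*time.Second, func() error {
            if missing == 0 {
                return nil
            }
            missing--
            return errors.New("missing components: kube-dns")
        })
    }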
	I1017 20:12:33.086086  376518 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:12:33.086146  376518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:12:33.104327  376518 system_svc.go:56] duration metric: took 18.230673ms WaitForService to wait for kubelet
	I1017 20:12:33.104360  376518 kubeadm.go:586] duration metric: took 13.841736715s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:12:33.104385  376518 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:12:33.108252  376518 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:12:33.108280  376518 node_conditions.go:123] node cpu capacity is 8
	I1017 20:12:33.108298  376518 node_conditions.go:105] duration metric: took 3.907208ms to run NodePressure ...
	I1017 20:12:33.108312  376518 start.go:241] waiting for startup goroutines ...
	I1017 20:12:33.108322  376518 start.go:246] waiting for cluster config update ...
	I1017 20:12:33.108335  376518 start.go:255] writing updated cluster config ...
	I1017 20:12:33.108629  376518 ssh_runner.go:195] Run: rm -f paused
	I1017 20:12:33.113094  376518 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:12:33.186119  376518 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gq5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.191579  376518 pod_ready.go:94] pod "coredns-66bc5c9577-gq5dd" is "Ready"
	I1017 20:12:33.191610  376518 pod_ready.go:86] duration metric: took 5.45888ms for pod "coredns-66bc5c9577-gq5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.194326  376518 pod_ready.go:83] waiting for pod "etcd-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.199109  376518 pod_ready.go:94] pod "etcd-embed-certs-051488" is "Ready"
	I1017 20:12:33.199132  376518 pod_ready.go:86] duration metric: took 4.779572ms for pod "etcd-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.201635  376518 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.206347  376518 pod_ready.go:94] pod "kube-apiserver-embed-certs-051488" is "Ready"
	I1017 20:12:33.206376  376518 pod_ready.go:86] duration metric: took 4.710188ms for pod "kube-apiserver-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.208800  376518 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.518162  376518 pod_ready.go:94] pod "kube-controller-manager-embed-certs-051488" is "Ready"
	I1017 20:12:33.518195  376518 pod_ready.go:86] duration metric: took 309.369744ms for pod "kube-controller-manager-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.717515  376518 pod_ready.go:83] waiting for pod "kube-proxy-95wmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:34.117998  376518 pod_ready.go:94] pod "kube-proxy-95wmw" is "Ready"
	I1017 20:12:34.118029  376518 pod_ready.go:86] duration metric: took 400.485054ms for pod "kube-proxy-95wmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:34.319260  376518 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:34.717708  376518 pod_ready.go:94] pod "kube-scheduler-embed-certs-051488" is "Ready"
	I1017 20:12:34.717767  376518 pod_ready.go:86] duration metric: took 398.475006ms for pod "kube-scheduler-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:34.717782  376518 pod_ready.go:40] duration metric: took 1.604651954s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:12:34.781701  376518 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:12:34.783860  376518 out.go:179] * Done! kubectl is now configured to use "embed-certs-051488" cluster and "default" namespace by default
	I1017 20:12:32.340835  385034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/client.crt ...
	I1017 20:12:32.340865  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/client.crt: {Name:mka29cf8226e58f8e6b43f5640866adcad75ebd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:32.341088  385034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/client.key ...
	I1017 20:12:32.341105  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/client.key: {Name:mk8a4693b49a6259eb801439094ac4a838948385 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:32.341236  385034 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.key.17fdb1e4
	I1017 20:12:32.341260  385034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.crt.17fdb1e4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1017 20:12:32.507233  385034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.crt.17fdb1e4 ...
	I1017 20:12:32.507264  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.crt.17fdb1e4: {Name:mk14e5666de92b68a63d8c6419b53f312b0e5045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:32.507480  385034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.key.17fdb1e4 ...
	I1017 20:12:32.507500  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.key.17fdb1e4: {Name:mk4d02f46380b38214d5651d71bcd2f66de8b6f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:32.507642  385034 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.crt.17fdb1e4 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.crt
	I1017 20:12:32.507765  385034 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.key.17fdb1e4 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.key
	I1017 20:12:32.507856  385034 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.key
	I1017 20:12:32.507877  385034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.crt with IP's: []
	I1017 20:12:32.626565  385034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.crt ...
	I1017 20:12:32.626599  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.crt: {Name:mk042a89c7506fa7e4a67833571f34e3c5e2d196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:32.626831  385034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.key ...
	I1017 20:12:32.626856  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.key: {Name:mkb97a3c9230d4ffed1653eef5f7638dfd900392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
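Note the IP list on the apiserver cert generated above: 10.96.0.1 is the first host address of the service CIDR (10.96.0.0/12), reserved for the in-cluster "kubernetes" Service, so the cert must cover it alongside 127.0.0.1 and the node IP. A small sketch of that derivation:

    package main

    import (
        "fmt"
        "net"
    )

    // firstServiceIP returns the first host address of an IPv4 CIDR,
    // i.e. the network address plus one.
    func firstServiceIP(cidr string) net.IP {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            panic(err)
        }
        ip := ipnet.IP.To4() // 10.96.0.0 for 10.96.0.0/12
        next := make(net.IP, len(ip))
        copy(next, ip)
        next[3]++
        return next
    }

    func main() {
        fmt.Println(firstServiceIP("10.96.0.0/12")) // 10.96.0.1
    }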
	I1017 20:12:32.627090  385034 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:12:32.627142  385034 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:12:32.627156  385034 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:12:32.627193  385034 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:12:32.627238  385034 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:12:32.627272  385034 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:12:32.627327  385034 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:32.627960  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:12:32.648057  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:12:32.667677  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:12:32.687134  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:12:32.707289  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 20:12:32.726519  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:12:32.745514  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:12:32.765897  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:12:32.785647  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:12:32.806857  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:12:32.831509  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:12:32.855874  385034 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:12:32.872706  385034 ssh_runner.go:195] Run: openssl version
	I1017 20:12:32.879303  385034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:12:32.889953  385034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:12:32.894531  385034 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:12:32.894605  385034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:12:32.933119  385034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
	I1017 20:12:32.942857  385034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:12:32.951958  385034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:12:32.955942  385034 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:12:32.956015  385034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:12:32.991886  385034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:12:33.003592  385034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:12:33.013686  385034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:33.018130  385034 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:33.018226  385034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:33.055623  385034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
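The three-step sequence repeated above for each CA file (link the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, then symlink it as /etc/ssl/certs/<hash>.0) follows the c_rehash convention that TLS libraries use to look up CA certificates by hash. The same steps driven from Go, as a sketch with illustrative paths:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA computes the OpenSSL subject hash of a PEM file and
    // exposes it under /etc/ssl/certs/<hash>.0, mirroring `ln -fs`.
    func installCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // replace any existing link, like -f
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }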
	I1017 20:12:33.066672  385034 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:12:33.071496  385034 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:12:33.071565  385034 kubeadm.go:400] StartCluster: {Name:newest-cni-051083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-051083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:33.071652  385034 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:12:33.071712  385034 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:12:33.106040  385034 cri.go:89] found id: ""
	I1017 20:12:33.106108  385034 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:12:33.116767  385034 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:12:33.126409  385034 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:12:33.126485  385034 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:12:33.135879  385034 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:12:33.135906  385034 kubeadm.go:157] found existing configuration files:
	
	I1017 20:12:33.135959  385034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:12:33.148886  385034 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:12:33.148963  385034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:12:33.159983  385034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:12:33.168549  385034 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:12:33.168611  385034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:12:33.177655  385034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:12:33.188138  385034 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:12:33.188199  385034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:12:33.198426  385034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:12:33.208553  385034 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:12:33.208619  385034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:12:33.219205  385034 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:12:33.293460  385034 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 20:12:33.361102  385034 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 20:12:32.975954  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1017 20:12:32.976022  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:12:32.976106  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:12:33.005970  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:33.005998  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:12:33.006003  344862 cri.go:89] found id: ""
	I1017 20:12:33.006040  344862 logs.go:282] 2 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:12:33.006123  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:33.010252  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:33.014502  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:12:33.014568  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:12:33.045586  344862 cri.go:89] found id: ""
	I1017 20:12:33.045618  344862 logs.go:282] 0 containers: []
	W1017 20:12:33.045630  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:12:33.045639  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:12:33.045700  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:12:33.076426  344862 cri.go:89] found id: ""
	I1017 20:12:33.076452  344862 logs.go:282] 0 containers: []
	W1017 20:12:33.076460  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:12:33.076466  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:12:33.076514  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:12:33.112313  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:33.112339  344862 cri.go:89] found id: ""
	I1017 20:12:33.112350  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:12:33.112407  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:33.117428  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:12:33.117494  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:12:33.155639  344862 cri.go:89] found id: ""
	I1017 20:12:33.155664  344862 logs.go:282] 0 containers: []
	W1017 20:12:33.155674  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:12:33.155682  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:12:33.155734  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:12:33.186649  344862 cri.go:89] found id: "a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:33.186671  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:33.186676  344862 cri.go:89] found id: ""
	I1017 20:12:33.186685  344862 logs.go:282] 2 containers: [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:12:33.186766  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:33.191970  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:33.196617  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:12:33.196685  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:12:33.229611  344862 cri.go:89] found id: ""
	I1017 20:12:33.229645  344862 logs.go:282] 0 containers: []
	W1017 20:12:33.229658  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:12:33.229667  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:12:33.229725  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:12:33.265307  344862 cri.go:89] found id: ""
	I1017 20:12:33.265338  344862 logs.go:282] 0 containers: []
	W1017 20:12:33.265350  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:12:33.265369  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:12:33.265383  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:33.297987  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:33.298023  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:12:33.353126  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:12:33.353174  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:12:33.452063  344862 logs.go:123] Gathering logs for kube-apiserver [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5] ...
	I1017 20:12:33.452109  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:33.488222  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:33.488260  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:33.548524  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:12:33.548575  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:33.579282  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:33.579316  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:33.611863  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:33.611892  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:33.632079  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:12:33.632119  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1017 20:12:35.938402  344862 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (2.30626298s)
	W1017 20:12:35.938449  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:58430->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:58430->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1017 20:12:35.938459  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:12:35.938478  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	W1017 20:12:35.976669  344862 logs.go:130] failed kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca": Process exited with status 1
	stdout:
	
	stderr:
	E1017 20:12:35.974053    6601 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca\": container with ID starting with 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca not found: ID does not exist" containerID="9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	time="2025-10-17T20:12:35Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca\": container with ID starting with 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca not found: ID does not exist"
	 output: 
	** stderr ** 
	E1017 20:12:35.974053    6601 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca\": container with ID starting with 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca not found: ID does not exist" containerID="9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	time="2025-10-17T20:12:35Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca\": container with ID starting with 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca not found: ID does not exist"
	
	** /stderr **
	I1017 20:12:39.646693  383050 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:12:39.646806  383050 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:12:39.646950  383050 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:12:39.647044  383050 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 20:12:39.647138  383050 kubeadm.go:318] OS: Linux
	I1017 20:12:39.647223  383050 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:12:39.647340  383050 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:12:39.647427  383050 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:12:39.647487  383050 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:12:39.647546  383050 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:12:39.647625  383050 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:12:39.647716  383050 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:12:39.647812  383050 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 20:12:39.647921  383050 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:12:39.648055  383050 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:12:39.648169  383050 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:12:39.648253  383050 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 20:12:39.652516  383050 out.go:252]   - Generating certificates and keys ...
	I1017 20:12:39.652634  383050 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:12:39.652722  383050 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:12:39.652852  383050 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:12:39.652959  383050 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:12:39.653059  383050 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:12:39.653146  383050 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:12:39.653245  383050 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:12:39.653453  383050 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-563805 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 20:12:39.653544  383050 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:12:39.653751  383050 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-563805 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 20:12:39.653840  383050 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:12:39.653930  383050 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:12:39.654005  383050 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:12:39.654115  383050 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:12:39.654193  383050 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:12:39.654297  383050 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:12:39.654381  383050 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:12:39.654480  383050 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:12:39.654569  383050 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:12:39.654699  383050 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:12:39.654810  383050 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 20:12:39.657533  383050 out.go:252]   - Booting up control plane ...
	I1017 20:12:39.657641  383050 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:12:39.657707  383050 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:12:39.657818  383050 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:12:39.657957  383050 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:12:39.658091  383050 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:12:39.658245  383050 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:12:39.658401  383050 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:12:39.658471  383050 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:12:39.658678  383050 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:12:39.658870  383050 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 20:12:39.658965  383050 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000948661s
	I1017 20:12:39.659100  383050 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:12:39.659227  383050 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1017 20:12:39.659365  383050 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:12:39.659472  383050 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 20:12:39.659584  383050 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.551295546s
	I1017 20:12:39.659703  383050 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.978016549s
	I1017 20:12:39.659838  383050 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002032412s
	I1017 20:12:39.659994  383050 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:12:39.660141  383050 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:12:39.660232  383050 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:12:39.660495  383050 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-563805 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:12:39.660585  383050 kubeadm.go:318] [bootstrap-token] Using token: atcwb8.1ipdap3j28ki8vtx
	I1017 20:12:39.663779  383050 out.go:252]   - Configuring RBAC rules ...
	I1017 20:12:39.663902  383050 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:12:39.664008  383050 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:12:39.664240  383050 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:12:39.664416  383050 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:12:39.664558  383050 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:12:39.664672  383050 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:12:39.664861  383050 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:12:39.664924  383050 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:12:39.665007  383050 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:12:39.665023  383050 kubeadm.go:318] 
	I1017 20:12:39.665112  383050 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:12:39.665120  383050 kubeadm.go:318] 
	I1017 20:12:39.665234  383050 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:12:39.665242  383050 kubeadm.go:318] 
	I1017 20:12:39.665275  383050 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:12:39.665363  383050 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:12:39.665431  383050 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:12:39.665439  383050 kubeadm.go:318] 
	I1017 20:12:39.665533  383050 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:12:39.665546  383050 kubeadm.go:318] 
	I1017 20:12:39.665620  383050 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:12:39.665640  383050 kubeadm.go:318] 
	I1017 20:12:39.665715  383050 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:12:39.665853  383050 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:12:39.665942  383050 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:12:39.665951  383050 kubeadm.go:318] 
	I1017 20:12:39.666058  383050 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:12:39.666158  383050 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:12:39.666177  383050 kubeadm.go:318] 
	I1017 20:12:39.666283  383050 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token atcwb8.1ipdap3j28ki8vtx \
	I1017 20:12:39.666415  383050 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 \
	I1017 20:12:39.666455  383050 kubeadm.go:318] 	--control-plane 
	I1017 20:12:39.666464  383050 kubeadm.go:318] 
	I1017 20:12:39.666564  383050 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:12:39.666572  383050 kubeadm.go:318] 
	I1017 20:12:39.666683  383050 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token atcwb8.1ipdap3j28ki8vtx \
	I1017 20:12:39.666873  383050 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 
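The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA certificate's Subject Public Key Info; joining nodes use it to pin the CA before trusting the bootstrap token. It can be recomputed from the CA cert, as in this sketch (the path assumes the minikube layout shown earlier):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }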
	I1017 20:12:39.666904  383050 cni.go:84] Creating CNI manager for ""
	I1017 20:12:39.666916  383050 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:39.669789  383050 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 20:12:39.671197  383050 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:12:39.677195  383050 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:12:39.677222  383050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:12:39.695422  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:12:39.980970  383050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:12:39.981087  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:39.981095  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-563805 minikube.k8s.io/updated_at=2025_10_17T20_12_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=default-k8s-diff-port-563805 minikube.k8s.io/primary=true
	I1017 20:12:39.994364  383050 ops.go:34] apiserver oom_adj: -16
	I1017 20:12:40.100076  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:38.476813  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:12:38.477309  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:12:38.477368  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:12:38.477430  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:12:38.508174  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:38.508204  344862 cri.go:89] found id: ""
	I1017 20:12:38.508214  344862 logs.go:282] 1 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5]
	I1017 20:12:38.508277  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:38.512911  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:12:38.513002  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:12:38.549161  344862 cri.go:89] found id: ""
	I1017 20:12:38.549193  344862 logs.go:282] 0 containers: []
	W1017 20:12:38.549204  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:12:38.549212  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:12:38.549283  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:12:38.585673  344862 cri.go:89] found id: ""
	I1017 20:12:38.585706  344862 logs.go:282] 0 containers: []
	W1017 20:12:38.585719  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:12:38.585728  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:12:38.585838  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:12:38.621093  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:38.621122  344862 cri.go:89] found id: ""
	I1017 20:12:38.621133  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:12:38.621199  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:38.627591  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:12:38.627781  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:12:38.673217  344862 cri.go:89] found id: ""
	I1017 20:12:38.673265  344862 logs.go:282] 0 containers: []
	W1017 20:12:38.673278  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:12:38.673286  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:12:38.673345  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:12:38.714908  344862 cri.go:89] found id: "a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:38.714935  344862 cri.go:89] found id: ""
	I1017 20:12:38.714979  344862 logs.go:282] 1 containers: [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54]
	I1017 20:12:38.715079  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:38.719272  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:12:38.719380  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:12:38.752784  344862 cri.go:89] found id: ""
	I1017 20:12:38.752818  344862 logs.go:282] 0 containers: []
	W1017 20:12:38.752830  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:12:38.752838  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:12:38.752896  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:12:38.791560  344862 cri.go:89] found id: ""
	I1017 20:12:38.791592  344862 logs.go:282] 0 containers: []
	W1017 20:12:38.791604  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:12:38.791615  344862 logs.go:123] Gathering logs for kube-apiserver [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5] ...
	I1017 20:12:38.791636  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:38.832252  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:38.832291  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:38.908334  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:12:38.908373  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:38.955736  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:38.956057  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:12:39.022858  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:39.022902  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:39.063816  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:12:39.063854  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:12:39.185336  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:39.185382  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:39.211843  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:12:39.211888  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:12:39.287199  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
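The repeated healthz failure above can be checked directly from the host; a sketch using the address from the log (the -k flag is an editorial shortcut that skips TLS verification, not something the test harness does):

    curl -k https://192.168.76.2:8443/healthz
    # while the apiserver is down this fails with "connection refused", matching the log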
	I1017 20:12:41.788834  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:12:41.789347  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:12:41.789409  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:12:41.789475  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:12:41.821119  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:41.821147  344862 cri.go:89] found id: ""
	I1017 20:12:41.821158  344862 logs.go:282] 1 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5]
	I1017 20:12:41.821213  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:41.826366  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:12:41.826437  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:12:41.858584  344862 cri.go:89] found id: ""
	I1017 20:12:41.858617  344862 logs.go:282] 0 containers: []
	W1017 20:12:41.858630  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:12:41.858639  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:12:41.858697  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:12:41.891844  344862 cri.go:89] found id: ""
	I1017 20:12:41.891870  344862 logs.go:282] 0 containers: []
	W1017 20:12:41.891877  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:12:41.891893  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:12:41.891943  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:12:41.926231  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:41.926278  344862 cri.go:89] found id: ""
	I1017 20:12:41.926289  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:12:41.926360  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:41.931004  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:12:41.931067  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:12:41.959760  344862 cri.go:89] found id: ""
	I1017 20:12:41.959792  344862 logs.go:282] 0 containers: []
	W1017 20:12:41.959804  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:12:41.959813  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:12:41.959880  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:12:41.991948  344862 cri.go:89] found id: "a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:41.991967  344862 cri.go:89] found id: ""
	I1017 20:12:41.991975  344862 logs.go:282] 1 containers: [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54]
	I1017 20:12:41.992038  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:41.996497  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:12:41.996576  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:12:42.025652  344862 cri.go:89] found id: ""
	I1017 20:12:42.025676  344862 logs.go:282] 0 containers: []
	W1017 20:12:42.025682  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:12:42.025688  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:12:42.025753  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:12:42.053374  344862 cri.go:89] found id: ""
	I1017 20:12:42.053398  344862 logs.go:282] 0 containers: []
	W1017 20:12:42.053408  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:12:42.053417  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:12:42.053430  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:42.081468  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:42.081502  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:12:42.138471  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:42.138520  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:42.172691  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:12:42.172730  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:12:42.273904  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:42.273957  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:42.298647  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:12:42.298688  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:12:42.371323  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:12:42.371356  344862 logs.go:123] Gathering logs for kube-apiserver [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5] ...
	I1017 20:12:42.371372  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:42.409548  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:42.409585  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
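The log gathering above is driven by crictl; the same tail can be reproduced on the node by feeding the ID from the discovery step back into the logs call, e.g. for the apiserver:

    CID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    sudo /usr/local/bin/crictl logs --tail 400 "$CID"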
	I1017 20:12:43.677338  385034 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:12:43.677389  385034 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:12:43.677481  385034 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:12:43.677530  385034 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 20:12:43.677562  385034 kubeadm.go:318] OS: Linux
	I1017 20:12:43.677602  385034 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:12:43.677690  385034 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:12:43.677791  385034 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:12:43.677861  385034 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:12:43.677947  385034 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:12:43.678025  385034 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:12:43.678104  385034 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:12:43.678173  385034 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 20:12:43.678275  385034 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:12:43.678406  385034 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:12:43.678527  385034 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:12:43.678613  385034 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 20:12:43.680901  385034 out.go:252]   - Generating certificates and keys ...
	I1017 20:12:43.681045  385034 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:12:43.681171  385034 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:12:43.681258  385034 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:12:43.681346  385034 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:12:43.681462  385034 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:12:43.681542  385034 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:12:43.681620  385034 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:12:43.681839  385034 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-051083] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1017 20:12:43.681928  385034 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:12:43.682148  385034 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-051083] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1017 20:12:43.682241  385034 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:12:43.682338  385034 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:12:43.682414  385034 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:12:43.682498  385034 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:12:43.682591  385034 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:12:43.682660  385034 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:12:43.682785  385034 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:12:43.682913  385034 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:12:43.682995  385034 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:12:43.683127  385034 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:12:43.683209  385034 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 20:12:43.685163  385034 out.go:252]   - Booting up control plane ...
	I1017 20:12:43.685310  385034 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:12:43.685429  385034 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:12:43.685524  385034 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:12:43.685653  385034 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:12:43.685823  385034 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:12:43.685985  385034 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:12:43.686111  385034 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:12:43.686176  385034 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:12:43.686379  385034 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:12:43.686541  385034 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 20:12:43.686621  385034 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.404805ms
	I1017 20:12:43.686795  385034 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:12:43.686910  385034 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1017 20:12:43.687026  385034 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:12:43.687151  385034 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 20:12:43.687271  385034 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.825964783s
	I1017 20:12:43.687375  385034 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.230106296s
	I1017 20:12:43.687478  385034 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001732083s
	I1017 20:12:43.687602  385034 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:12:43.687814  385034 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:12:43.687937  385034 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:12:43.688122  385034 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-051083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:12:43.688175  385034 kubeadm.go:318] [bootstrap-token] Using token: btetdk.jlvvs0vi98tn7d4l
	I1017 20:12:43.689818  385034 out.go:252]   - Configuring RBAC rules ...
	I1017 20:12:43.689978  385034 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:12:43.690099  385034 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:12:43.690320  385034 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:12:43.690487  385034 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:12:43.690582  385034 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:12:43.690653  385034 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:12:43.690797  385034 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:12:43.690865  385034 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:12:43.690931  385034 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:12:43.690939  385034 kubeadm.go:318] 
	I1017 20:12:43.690986  385034 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:12:43.690995  385034 kubeadm.go:318] 
	I1017 20:12:43.691066  385034 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:12:43.691072  385034 kubeadm.go:318] 
	I1017 20:12:43.691093  385034 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:12:43.691146  385034 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:12:43.691188  385034 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:12:43.691194  385034 kubeadm.go:318] 
	I1017 20:12:43.691248  385034 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:12:43.691264  385034 kubeadm.go:318] 
	I1017 20:12:43.691314  385034 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:12:43.691320  385034 kubeadm.go:318] 
	I1017 20:12:43.691361  385034 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:12:43.691445  385034 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:12:43.691564  385034 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:12:43.691584  385034 kubeadm.go:318] 
	I1017 20:12:43.691698  385034 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:12:43.691842  385034 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:12:43.691853  385034 kubeadm.go:318] 
	I1017 20:12:43.691977  385034 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token btetdk.jlvvs0vi98tn7d4l \
	I1017 20:12:43.692171  385034 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 \
	I1017 20:12:43.692214  385034 kubeadm.go:318] 	--control-plane 
	I1017 20:12:43.692223  385034 kubeadm.go:318] 
	I1017 20:12:43.692363  385034 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:12:43.692377  385034 kubeadm.go:318] 
	I1017 20:12:43.692519  385034 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token btetdk.jlvvs0vi98tn7d4l \
	I1017 20:12:43.692682  385034 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 
	I1017 20:12:43.692701  385034 cni.go:84] Creating CNI manager for ""
	I1017 20:12:43.692710  385034 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:43.696901  385034 out.go:179] * Configuring CNI (Container Networking Interface) ...
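The control-plane checks a few lines above poll fixed local endpoints; a sketch of probing them by hand (addresses taken from the log, -k again an editorial shortcut to skip TLS verification):

    curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez     # kube-scheduler
    curl -k https://192.168.103.2:8443/livez  # kube-apiserver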
	
	
	==> CRI-O <==
	Oct 17 20:12:32 embed-certs-051488 crio[776]: time="2025-10-17T20:12:32.211228165Z" level=info msg="Starting container: 85ade69c1048e52191f9bc2d004d9ed92c83f21a322d40cb1eaca8e4f5fca0bd" id=be8dc9cc-688e-4543-bbd1-33e6f3f0a4e3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:12:32 embed-certs-051488 crio[776]: time="2025-10-17T20:12:32.213503497Z" level=info msg="Started container" PID=1843 containerID=85ade69c1048e52191f9bc2d004d9ed92c83f21a322d40cb1eaca8e4f5fca0bd description=kube-system/coredns-66bc5c9577-gq5dd/coredns id=be8dc9cc-688e-4543-bbd1-33e6f3f0a4e3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=50a4e04ed4c46eaec97432f1924191428fba0ef051d437c31f219ca7d448d820
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.295424502Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2bb9cf7b-3187-4853-b8e8-6bf8eb40ecbc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.295561672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.301356589Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5b9655d3e28dc97ae1c1ac03050136178513380d3997e8287853dbbc6d479ecc UID:b23380a0-e664-4975-ad28-1996f0687b6c NetNS:/var/run/netns/d950c145-e194-4ff0-947d-2c47f5853165 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a490}] Aliases:map[]}"
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.30139411Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.314777794Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5b9655d3e28dc97ae1c1ac03050136178513380d3997e8287853dbbc6d479ecc UID:b23380a0-e664-4975-ad28-1996f0687b6c NetNS:/var/run/netns/d950c145-e194-4ff0-947d-2c47f5853165 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a490}] Aliases:map[]}"
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.314984986Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.31601054Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.316869937Z" level=info msg="Ran pod sandbox 5b9655d3e28dc97ae1c1ac03050136178513380d3997e8287853dbbc6d479ecc with infra container: default/busybox/POD" id=2bb9cf7b-3187-4853-b8e8-6bf8eb40ecbc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.318208513Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99a117e5-9f6b-4762-8204-c6221a34041b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.318338794Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=99a117e5-9f6b-4762-8204-c6221a34041b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.318379901Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=99a117e5-9f6b-4762-8204-c6221a34041b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.31919052Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0cbda68b-d2b2-432d-b2d1-3baf81b8de96 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:12:35 embed-certs-051488 crio[776]: time="2025-10-17T20:12:35.322388959Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 20:12:37 embed-certs-051488 crio[776]: time="2025-10-17T20:12:37.378945526Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0cbda68b-d2b2-432d-b2d1-3baf81b8de96 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:12:37 embed-certs-051488 crio[776]: time="2025-10-17T20:12:37.379887389Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=756a8215-69a0-4504-9f13-d675770f81d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:12:37 embed-certs-051488 crio[776]: time="2025-10-17T20:12:37.381438276Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bbc00e58-dda9-429d-8e75-137af7096fe5 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:12:37 embed-certs-051488 crio[776]: time="2025-10-17T20:12:37.385361304Z" level=info msg="Creating container: default/busybox/busybox" id=583fb5a0-cbfe-4cb6-b708-bb87825f070d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:12:37 embed-certs-051488 crio[776]: time="2025-10-17T20:12:37.386303859Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:12:37 embed-certs-051488 crio[776]: time="2025-10-17T20:12:37.391382469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:12:37 embed-certs-051488 crio[776]: time="2025-10-17T20:12:37.39210141Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:12:37 embed-certs-051488 crio[776]: time="2025-10-17T20:12:37.432583061Z" level=info msg="Created container db1dc4f74050d679ba7cc03dfb48916a08a3d2f14f08f2586f3127625ba54257: default/busybox/busybox" id=583fb5a0-cbfe-4cb6-b708-bb87825f070d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:12:37 embed-certs-051488 crio[776]: time="2025-10-17T20:12:37.433796322Z" level=info msg="Starting container: db1dc4f74050d679ba7cc03dfb48916a08a3d2f14f08f2586f3127625ba54257" id=5ba43dcb-f5d0-4fe2-a126-ac3a9b93873e name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:12:37 embed-certs-051488 crio[776]: time="2025-10-17T20:12:37.437765484Z" level=info msg="Started container" PID=1917 containerID=db1dc4f74050d679ba7cc03dfb48916a08a3d2f14f08f2586f3127625ba54257 description=default/busybox/busybox id=5ba43dcb-f5d0-4fe2-a126-ac3a9b93873e name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b9655d3e28dc97ae1c1ac03050136178513380d3997e8287853dbbc6d479ecc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	db1dc4f74050d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   5b9655d3e28dc       busybox                                      default
	85ade69c1048e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   50a4e04ed4c46       coredns-66bc5c9577-gq5dd                     kube-system
	34817266f81b4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   6569aa42d5ee1       storage-provisioner                          kube-system
	5b90a6593f445       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   e372624d4fb79       kindnet-rzd8h                                kube-system
	98be1413065fa       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   217a308dd7ed5       kube-proxy-95wmw                             kube-system
	00e03eb2703c3       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   2a2d52aa1edce       kube-controller-manager-embed-certs-051488   kube-system
	b287f2e30d77e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   6a90d19f6d3c3       kube-scheduler-embed-certs-051488            kube-system
	acf25b3f8b3d0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   10ba4880ca683       etcd-embed-certs-051488                      kube-system
	e6111072b63f2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   a772c6f34b513       kube-apiserver-embed-certs-051488            kube-system
	
	
	==> coredns [85ade69c1048e52191f9bc2d004d9ed92c83f21a322d40cb1eaca8e4f5fca0bd] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40411 - 43307 "HINFO IN 1160527843595872658.4621890928593827959. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.126190321s
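CoreDNS is serving on :53 and its HINFO self-check query resolved, per the line above; a quick end-to-end check from inside the cluster (a sketch, assuming the busybox pod from the container list ships an nslookup applet, and using the kube-dns cluster IP 10.96.0.10 reported in the kube-apiserver section below):

    kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10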
	
	
	==> describe nodes <==
	Name:               embed-certs-051488
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-051488
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=embed-certs-051488
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_12_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:12:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-051488
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:12:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:12:44 +0000   Fri, 17 Oct 2025 20:12:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:12:44 +0000   Fri, 17 Oct 2025 20:12:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:12:44 +0000   Fri, 17 Oct 2025 20:12:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:12:44 +0000   Fri, 17 Oct 2025 20:12:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-051488
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                9303a0d5-fdd2-44db-b000-32ff1975a9e6
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-gq5dd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-051488                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-rzd8h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-051488             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-051488    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-95wmw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-051488             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node embed-certs-051488 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node embed-certs-051488 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node embed-certs-051488 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node embed-certs-051488 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node embed-certs-051488 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node embed-certs-051488 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node embed-certs-051488 event: Registered Node embed-certs-051488 in Controller
	  Normal  NodeReady                13s                kubelet          Node embed-certs-051488 status is now: NodeReady
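For the Allocated resources block above, the 850m (10%) CPU figure is just the per-pod requests summed: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. 0.85 of the node's 8 allocatable cores, which rounds down to the 10% shown; the 100m CPU limit is kindnet's alone, since no other pod sets one.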
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [acf25b3f8b3d074d5014fcbcfbe880428a94b47360c3e1fef09b946dfd9f37f1] <==
	{"level":"info","ts":"2025-10-17T20:12:19.999267Z","caller":"traceutil/trace.go:172","msg":"trace[2023273531] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"188.253957ms","start":"2025-10-17T20:12:19.811002Z","end":"2025-10-17T20:12:19.999256Z","steps":["trace[2023273531] 'process raft request'  (duration: 188.135154ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:19.999289Z","caller":"traceutil/trace.go:172","msg":"trace[979685435] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"188.14381ms","start":"2025-10-17T20:12:19.811134Z","end":"2025-10-17T20:12:19.999278Z","steps":["trace[979685435] 'process raft request'  (duration: 188.044629ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:19.999353Z","caller":"traceutil/trace.go:172","msg":"trace[526895729] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"188.171793ms","start":"2025-10-17T20:12:19.811161Z","end":"2025-10-17T20:12:19.999333Z","steps":["trace[526895729] 'process raft request'  (duration: 188.044059ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:19.999378Z","caller":"traceutil/trace.go:172","msg":"trace[55058406] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"184.466547ms","start":"2025-10-17T20:12:19.814906Z","end":"2025-10-17T20:12:19.999372Z","steps":["trace[55058406] 'process raft request'  (duration: 184.346805ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:19.999379Z","caller":"traceutil/trace.go:172","msg":"trace[1435646280] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"174.093307ms","start":"2025-10-17T20:12:19.825275Z","end":"2025-10-17T20:12:19.999369Z","steps":["trace[1435646280] 'process raft request'  (duration: 174.042872ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:20.128668Z","caller":"traceutil/trace.go:172","msg":"trace[440012061] linearizableReadLoop","detail":"{readStateIndex:384; appliedIndex:384; }","duration":"121.063308ms","start":"2025-10-17T20:12:20.007579Z","end":"2025-10-17T20:12:20.128642Z","steps":["trace[440012061] 'read index received'  (duration: 121.05209ms)","trace[440012061] 'applied index is now lower than readState.Index'  (duration: 9.806µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T20:12:20.187315Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"179.701945ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" limit:1 ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2025-10-17T20:12:20.187385Z","caller":"traceutil/trace.go:172","msg":"trace[1112639718] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:375; }","duration":"179.794656ms","start":"2025-10-17T20:12:20.007574Z","end":"2025-10-17T20:12:20.187368Z","steps":["trace[1112639718] 'agreement among raft nodes before linearized reading'  (duration: 121.197115ms)","trace[1112639718] 'range keys from in-memory index tree'  (duration: 58.402946ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:12:20.187442Z","caller":"traceutil/trace.go:172","msg":"trace[556953092] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"181.68457ms","start":"2025-10-17T20:12:20.005733Z","end":"2025-10-17T20:12:20.187417Z","steps":["trace[556953092] 'process raft request'  (duration: 122.946123ms)","trace[556953092] 'compare'  (duration: 58.498186ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:12:20.187488Z","caller":"traceutil/trace.go:172","msg":"trace[1653127393] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"181.576164ms","start":"2025-10-17T20:12:20.005895Z","end":"2025-10-17T20:12:20.187471Z","steps":["trace[1653127393] 'process raft request'  (duration: 181.450043ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:20.187592Z","caller":"traceutil/trace.go:172","msg":"trace[620602615] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"179.230755ms","start":"2025-10-17T20:12:20.008350Z","end":"2025-10-17T20:12:20.187581Z","steps":["trace[620602615] 'process raft request'  (duration: 179.09377ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:20.187615Z","caller":"traceutil/trace.go:172","msg":"trace[1302479939] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"177.450226ms","start":"2025-10-17T20:12:20.010154Z","end":"2025-10-17T20:12:20.187605Z","steps":["trace[1302479939] 'process raft request'  (duration: 177.391506ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:20.187800Z","caller":"traceutil/trace.go:172","msg":"trace[1653700697] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"179.009224ms","start":"2025-10-17T20:12:20.008779Z","end":"2025-10-17T20:12:20.187788Z","steps":["trace[1653700697] 'process raft request'  (duration: 178.736335ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:20.187805Z","caller":"traceutil/trace.go:172","msg":"trace[258117046] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"179.214698ms","start":"2025-10-17T20:12:20.008580Z","end":"2025-10-17T20:12:20.187794Z","steps":["trace[258117046] 'process raft request'  (duration: 178.905768ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T20:12:20.187935Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.396485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4299"}
	{"level":"info","ts":"2025-10-17T20:12:20.187970Z","caller":"traceutil/trace.go:172","msg":"trace[550112777] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:381; }","duration":"177.441026ms","start":"2025-10-17T20:12:20.010519Z","end":"2025-10-17T20:12:20.187960Z","steps":["trace[550112777] 'agreement among raft nodes before linearized reading'  (duration: 177.025366ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:20.187977Z","caller":"traceutil/trace.go:172","msg":"trace[1079502976] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"177.441662ms","start":"2025-10-17T20:12:20.010526Z","end":"2025-10-17T20:12:20.187967Z","steps":["trace[1079502976] 'process raft request'  (duration: 177.055092ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:20.187594Z","caller":"traceutil/trace.go:172","msg":"trace[488949002] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"179.682628ms","start":"2025-10-17T20:12:20.007905Z","end":"2025-10-17T20:12:20.187587Z","steps":["trace[488949002] 'process raft request'  (duration: 179.490382ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:20.366465Z","caller":"traceutil/trace.go:172","msg":"trace[489800925] linearizableReadLoop","detail":"{readStateIndex:398; appliedIndex:398; }","duration":"133.06523ms","start":"2025-10-17T20:12:20.233376Z","end":"2025-10-17T20:12:20.366441Z","steps":["trace[489800925] 'read index received'  (duration: 133.054909ms)","trace[489800925] 'applied index is now lower than readState.Index'  (duration: 8.78µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:12:20.379156Z","caller":"traceutil/trace.go:172","msg":"trace[1721297632] transaction","detail":"{read_only:false; number_of_response:1; response_revision:389; }","duration":"162.147004ms","start":"2025-10-17T20:12:20.216990Z","end":"2025-10-17T20:12:20.379137Z","steps":["trace[1721297632] 'process raft request'  (duration: 149.594283ms)","trace[1721297632] 'compare'  (duration: 12.485198ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T20:12:20.379228Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.433006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-rzd8h\" limit:1 ","response":"range_response_count:1 size:3704"}
	{"level":"info","ts":"2025-10-17T20:12:20.379280Z","caller":"traceutil/trace.go:172","msg":"trace[1479198004] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-rzd8h; range_end:; response_count:1; response_revision:389; }","duration":"165.5045ms","start":"2025-10-17T20:12:20.213761Z","end":"2025-10-17T20:12:20.379266Z","steps":["trace[1479198004] 'agreement among raft nodes before linearized reading'  (duration: 152.784634ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T20:12:20.399791Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.617965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-10-17T20:12:20.399850Z","caller":"traceutil/trace.go:172","msg":"trace[1835661762] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:389; }","duration":"135.696493ms","start":"2025-10-17T20:12:20.264141Z","end":"2025-10-17T20:12:20.399838Z","steps":["trace[1835661762] 'agreement among raft nodes before linearized reading'  (duration: 135.479587ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:12:20.399854Z","caller":"traceutil/trace.go:172","msg":"trace[1808709848] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"162.375949ms","start":"2025-10-17T20:12:20.237464Z","end":"2025-10-17T20:12:20.399840Z","steps":["trace[1808709848] 'process raft request'  (duration: 162.190182ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:12:44 up  1:55,  0 user,  load average: 9.16, 5.14, 2.97
	Linux embed-certs-051488 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5b90a6593f445b7500e3fb4f22b117f17d7a0ac2c1a1b53c283a585b330c1dd8] <==
	I1017 20:12:21.040514       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:12:21.040817       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1017 20:12:21.040965       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:12:21.040982       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:12:21.041002       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:12:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:12:21.337420       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:12:21.337508       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:12:21.337523       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:12:21.337683       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:12:21.728536       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:12:21.728565       1 metrics.go:72] Registering metrics
	I1017 20:12:21.736343       1 controller.go:711] "Syncing nftables rules"
	I1017 20:12:31.338874       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:12:31.338942       1 main.go:301] handling current node
	I1017 20:12:41.339887       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:12:41.339930       1 main.go:301] handling current node
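The one error in the kindnet log above is the NRI connect failure; the socket only exists when NRI is enabled in the container runtime, so a quick look on the node confirms whether it is simply absent (a sketch):

    ls -l /var/run/nri/nri.sock 2>/dev/null || echo 'NRI socket not present'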
	
	
	==> kube-apiserver [e6111072b63f2d3d186904a180accb3360b3aa9cdb9f49ac2f138f1d4016654d] <==
	I1017 20:12:11.380651       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:12:11.384171       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 20:12:11.384371       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:12:11.390264       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:12:11.392176       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:12:11.417213       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 20:12:11.564464       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:12:12.282935       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 20:12:12.288085       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 20:12:12.288104       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:12:12.938545       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:12:12.988878       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:12:13.091438       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 20:12:13.098132       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1017 20:12:13.099416       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:12:13.105575       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:12:13.331482       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:12:13.867553       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:12:13.879483       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 20:12:13.888818       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 20:12:19.334568       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:12:19.467732       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1017 20:12:19.627811       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:12:19.799588       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1017 20:12:43.090998       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:55964: use of closed network connection
	
	
	==> kube-controller-manager [00e03eb2703c3c49e435c4bb3d99d2f23aafb6af483e3d84c1c63c7682476265] <==
	I1017 20:12:18.300534       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:12:18.306780       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:12:18.329687       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 20:12:18.329878       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:12:18.330964       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 20:12:18.331014       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:12:18.331059       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:12:18.331066       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:12:18.331155       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:12:18.331194       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:12:18.331279       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:12:18.331298       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 20:12:18.331524       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:12:18.331847       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:12:18.332784       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:12:18.334478       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:12:18.334509       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:12:18.336186       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:12:18.342543       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 20:12:18.346858       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:12:18.351042       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:12:18.358416       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:12:18.366995       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:12:18.482810       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-051488" podCIDRs=["10.244.0.0/24"]
	I1017 20:12:33.281951       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [98be1413065fa4957ec027f26dac7174ffef8878f8718b13d253127ac2fe300e] <==
	I1017 20:12:20.845262       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:12:20.935677       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:12:21.036094       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:12:21.036140       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1017 20:12:21.036249       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:12:21.057185       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:12:21.057248       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:12:21.063114       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:12:21.063538       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:12:21.063583       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:12:21.065246       1 config.go:200] "Starting service config controller"
	I1017 20:12:21.065281       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:12:21.065332       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:12:21.065350       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:12:21.065357       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:12:21.065363       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:12:21.065433       1 config.go:309] "Starting node config controller"
	I1017 20:12:21.065494       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:12:21.065505       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:12:21.165504       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:12:21.165528       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:12:21.165547       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b287f2e30d77e375ee105ddf1351ea07e5e235b38a6db1fb287ad7047c743e38] <==
	E1017 20:12:11.600402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:12:11.600469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:12:11.600539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:12:11.600671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:12:11.600734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:12:11.600889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:12:11.600970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:12:11.601008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:12:11.601227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:12:11.601383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:12:11.601446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:12:11.601361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:12:11.601885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:12:11.602620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:12:11.602652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:12:12.410176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:12:12.516037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:12:12.538517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:12:12.557823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:12:12.636136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:12:12.659869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:12:12.667728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:12:12.706500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:12:12.937428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1017 20:12:15.486384       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:12:14 embed-certs-051488 kubelet[1324]: E1017 20:12:14.797422    1324 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-051488\" already exists" pod="kube-system/etcd-embed-certs-051488"
	Oct 17 20:12:14 embed-certs-051488 kubelet[1324]: I1017 20:12:14.823264    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-051488" podStartSLOduration=1.823238935 podStartE2EDuration="1.823238935s" podCreationTimestamp="2025-10-17 20:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:14.823047 +0000 UTC m=+1.193405370" watchObservedRunningTime="2025-10-17 20:12:14.823238935 +0000 UTC m=+1.193597306"
	Oct 17 20:12:14 embed-certs-051488 kubelet[1324]: I1017 20:12:14.840435    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-051488" podStartSLOduration=1.8403855999999998 podStartE2EDuration="1.8403856s" podCreationTimestamp="2025-10-17 20:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:14.838958269 +0000 UTC m=+1.209316639" watchObservedRunningTime="2025-10-17 20:12:14.8403856 +0000 UTC m=+1.210743962"
	Oct 17 20:12:14 embed-certs-051488 kubelet[1324]: I1017 20:12:14.873442    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-051488" podStartSLOduration=1.873414635 podStartE2EDuration="1.873414635s" podCreationTimestamp="2025-10-17 20:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:14.858983674 +0000 UTC m=+1.229342044" watchObservedRunningTime="2025-10-17 20:12:14.873414635 +0000 UTC m=+1.243773005"
	Oct 17 20:12:14 embed-certs-051488 kubelet[1324]: I1017 20:12:14.902904    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-051488" podStartSLOduration=1.902875464 podStartE2EDuration="1.902875464s" podCreationTimestamp="2025-10-17 20:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:14.874266367 +0000 UTC m=+1.244624737" watchObservedRunningTime="2025-10-17 20:12:14.902875464 +0000 UTC m=+1.273233834"
	Oct 17 20:12:18 embed-certs-051488 kubelet[1324]: I1017 20:12:18.485532    1324 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 20:12:18 embed-certs-051488 kubelet[1324]: I1017 20:12:18.486433    1324 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 20:12:20 embed-certs-051488 kubelet[1324]: I1017 20:12:20.160803    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51a10ca2-e69d-428e-9703-fdaa7b794cda-kube-proxy\") pod \"kube-proxy-95wmw\" (UID: \"51a10ca2-e69d-428e-9703-fdaa7b794cda\") " pod="kube-system/kube-proxy-95wmw"
	Oct 17 20:12:20 embed-certs-051488 kubelet[1324]: I1017 20:12:20.160861    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51a10ca2-e69d-428e-9703-fdaa7b794cda-lib-modules\") pod \"kube-proxy-95wmw\" (UID: \"51a10ca2-e69d-428e-9703-fdaa7b794cda\") " pod="kube-system/kube-proxy-95wmw"
	Oct 17 20:12:20 embed-certs-051488 kubelet[1324]: I1017 20:12:20.160893    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51a10ca2-e69d-428e-9703-fdaa7b794cda-xtables-lock\") pod \"kube-proxy-95wmw\" (UID: \"51a10ca2-e69d-428e-9703-fdaa7b794cda\") " pod="kube-system/kube-proxy-95wmw"
	Oct 17 20:12:20 embed-certs-051488 kubelet[1324]: I1017 20:12:20.160916    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v45s\" (UniqueName: \"kubernetes.io/projected/51a10ca2-e69d-428e-9703-fdaa7b794cda-kube-api-access-6v45s\") pod \"kube-proxy-95wmw\" (UID: \"51a10ca2-e69d-428e-9703-fdaa7b794cda\") " pod="kube-system/kube-proxy-95wmw"
	Oct 17 20:12:20 embed-certs-051488 kubelet[1324]: I1017 20:12:20.261953    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2175403d-fd55-45fd-8a79-62390167379e-cni-cfg\") pod \"kindnet-rzd8h\" (UID: \"2175403d-fd55-45fd-8a79-62390167379e\") " pod="kube-system/kindnet-rzd8h"
	Oct 17 20:12:20 embed-certs-051488 kubelet[1324]: I1017 20:12:20.262011    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2175403d-fd55-45fd-8a79-62390167379e-xtables-lock\") pod \"kindnet-rzd8h\" (UID: \"2175403d-fd55-45fd-8a79-62390167379e\") " pod="kube-system/kindnet-rzd8h"
	Oct 17 20:12:20 embed-certs-051488 kubelet[1324]: I1017 20:12:20.262087    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2175403d-fd55-45fd-8a79-62390167379e-lib-modules\") pod \"kindnet-rzd8h\" (UID: \"2175403d-fd55-45fd-8a79-62390167379e\") " pod="kube-system/kindnet-rzd8h"
	Oct 17 20:12:20 embed-certs-051488 kubelet[1324]: I1017 20:12:20.262156    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgx95\" (UniqueName: \"kubernetes.io/projected/2175403d-fd55-45fd-8a79-62390167379e-kube-api-access-zgx95\") pod \"kindnet-rzd8h\" (UID: \"2175403d-fd55-45fd-8a79-62390167379e\") " pod="kube-system/kindnet-rzd8h"
	Oct 17 20:12:21 embed-certs-051488 kubelet[1324]: I1017 20:12:21.795220    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-95wmw" podStartSLOduration=2.795197129 podStartE2EDuration="2.795197129s" podCreationTimestamp="2025-10-17 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:21.795145242 +0000 UTC m=+8.165503609" watchObservedRunningTime="2025-10-17 20:12:21.795197129 +0000 UTC m=+8.165555498"
	Oct 17 20:12:21 embed-certs-051488 kubelet[1324]: I1017 20:12:21.811935    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rzd8h" podStartSLOduration=2.811884208 podStartE2EDuration="2.811884208s" podCreationTimestamp="2025-10-17 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:21.810917999 +0000 UTC m=+8.181276369" watchObservedRunningTime="2025-10-17 20:12:21.811884208 +0000 UTC m=+8.182242573"
	Oct 17 20:12:31 embed-certs-051488 kubelet[1324]: I1017 20:12:31.820591    1324 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 20:12:31 embed-certs-051488 kubelet[1324]: I1017 20:12:31.947623    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7bgx\" (UniqueName: \"kubernetes.io/projected/4b66cc71-3175-46bd-93d2-28303821da56-kube-api-access-j7bgx\") pod \"storage-provisioner\" (UID: \"4b66cc71-3175-46bd-93d2-28303821da56\") " pod="kube-system/storage-provisioner"
	Oct 17 20:12:31 embed-certs-051488 kubelet[1324]: I1017 20:12:31.947691    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7wzn\" (UniqueName: \"kubernetes.io/projected/4c8aa324-2af3-4de9-9e87-e0d7c2049d50-kube-api-access-z7wzn\") pod \"coredns-66bc5c9577-gq5dd\" (UID: \"4c8aa324-2af3-4de9-9e87-e0d7c2049d50\") " pod="kube-system/coredns-66bc5c9577-gq5dd"
	Oct 17 20:12:31 embed-certs-051488 kubelet[1324]: I1017 20:12:31.947719    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4b66cc71-3175-46bd-93d2-28303821da56-tmp\") pod \"storage-provisioner\" (UID: \"4b66cc71-3175-46bd-93d2-28303821da56\") " pod="kube-system/storage-provisioner"
	Oct 17 20:12:31 embed-certs-051488 kubelet[1324]: I1017 20:12:31.947801    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c8aa324-2af3-4de9-9e87-e0d7c2049d50-config-volume\") pod \"coredns-66bc5c9577-gq5dd\" (UID: \"4c8aa324-2af3-4de9-9e87-e0d7c2049d50\") " pod="kube-system/coredns-66bc5c9577-gq5dd"
	Oct 17 20:12:32 embed-certs-051488 kubelet[1324]: I1017 20:12:32.837219    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gq5dd" podStartSLOduration=13.837193179 podStartE2EDuration="13.837193179s" podCreationTimestamp="2025-10-17 20:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:32.823656922 +0000 UTC m=+19.194015291" watchObservedRunningTime="2025-10-17 20:12:32.837193179 +0000 UTC m=+19.207551549"
	Oct 17 20:12:32 embed-certs-051488 kubelet[1324]: I1017 20:12:32.848884    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.848859267 podStartE2EDuration="12.848859267s" podCreationTimestamp="2025-10-17 20:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:32.837379583 +0000 UTC m=+19.207737946" watchObservedRunningTime="2025-10-17 20:12:32.848859267 +0000 UTC m=+19.219217638"
	Oct 17 20:12:35 embed-certs-051488 kubelet[1324]: I1017 20:12:35.067667    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7cg5\" (UniqueName: \"kubernetes.io/projected/b23380a0-e664-4975-ad28-1996f0687b6c-kube-api-access-x7cg5\") pod \"busybox\" (UID: \"b23380a0-e664-4975-ad28-1996f0687b6c\") " pod="default/busybox"
	
	
	==> storage-provisioner [34817266f81b40e2ba6c610c36774e0564f35cbd6500d83757d0e3f8b553b8c0] <==
	I1017 20:12:32.222225       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:12:32.231336       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:12:32.231382       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:12:32.234357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:32.240852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:12:32.241028       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:12:32.241165       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c795daa1-3cc4-4dc8-b9fb-3eec5780324d", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-051488_950d7636-1228-4450-9543-e3a9aa913a64 became leader
	I1017 20:12:32.241218       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-051488_950d7636-1228-4450-9543-e3a9aa913a64!
	W1017 20:12:32.243517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:32.248240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:12:32.341476       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-051488_950d7636-1228-4450-9543-e3a9aa913a64!
	W1017 20:12:34.251706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:34.256094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:36.260661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:36.267008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:38.271145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:38.275905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:40.279679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:40.289779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:42.293107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:42.297830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:44.304782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:44.316256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
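The storage-provisioner log above shows a standard client-go leader election: the pod acquires the kube-system/k8s.io-minikube-hostpath lock before starting its provisioner controller, and the repeated "v1 Endpoints is deprecated in v1.33+" warnings come from that lock being backed by an Endpoints object. As a minimal sketch (not minikube's actual provisioner code; the lock name and identity below are just taken from the log for illustration), the same election done with the recommended Lease lock would look like this and would not trigger those warnings:

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hostname, _ := os.Hostname()
	// Lease objects instead of the deprecated Endpoints lock; the lock
	// name mirrors the one seen in the storage-provisioner log.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: hostname},
	)
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Corresponds to "Starting provisioner controller" above.
				log.Println("became leader; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership; shutting down")
			},
		},
	})
}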
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-051488 -n embed-certs-051488
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-051488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.58s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-051083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-051083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (245.103893ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:12:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-051083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
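The MK_ADDON_ENABLE_PAUSED failure above comes from the pre-flight "check paused" probe: before enabling an addon, minikube lists containers with `sudo runc list -f json`, and on this runner runc exits 1 because /run/runc does not exist (the CRI-O node keeps its runtime state elsewhere). A minimal sketch of such a probe follows; it is not minikube's implementation, and treating a missing state directory as "no containers" is an assumption about how a robust caller might handle exactly this error:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "running", "paused"
}

// listPaused shells out to `sudo runc list -f json` and returns the IDs of
// paused containers.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// This is the failure seen in the log: runc exits 1 with
		// "open /run/runc: no such file or directory" on stderr.
		if ee, ok := err.(*exec.ExitError); ok &&
			strings.Contains(string(ee.Stderr), "no such file or directory") {
			return nil, nil // assumption: treat a missing state dir as empty
		}
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("paused containers:", ids)
}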
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-051083
helpers_test.go:243: (dbg) docker inspect newest-cni-051083:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3",
	        "Created": "2025-10-17T20:12:27.257340799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 386604,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:12:27.301021515Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3/hosts",
	        "LogPath": "/var/lib/docker/containers/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3-json.log",
	        "Name": "/newest-cni-051083",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-051083:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-051083",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3",
	                "LowerDir": "/var/lib/docker/overlay2/062a91ed6c5db49f3f5dcb31d62da98e5eff9b8268ab536ed44bdffd07c1cce6-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/062a91ed6c5db49f3f5dcb31d62da98e5eff9b8268ab536ed44bdffd07c1cce6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/062a91ed6c5db49f3f5dcb31d62da98e5eff9b8268ab536ed44bdffd07c1cce6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/062a91ed6c5db49f3f5dcb31d62da98e5eff9b8268ab536ed44bdffd07c1cce6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-051083",
	                "Source": "/var/lib/docker/volumes/newest-cni-051083/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-051083",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-051083",
	                "name.minikube.sigs.k8s.io": "newest-cni-051083",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e482af42569d85cc5f61711c634d75728d5e9aa34b552e8692e56062e74de8d2",
	            "SandboxKey": "/var/run/docker/netns/e482af42569d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-051083": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:53:30:30:5b:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "42b465f0ccbda0e5ca1971c81bb13558e21d93dd3bfe9fc99a5609898791da62",
	                    "EndpointID": "cb1bc484ad7c47d5e66ea7d72aee865ffea982a6c3d05d60028ecec866327225",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-051083",
	                        "46e8db0f52af"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
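The inspect dump above is what the post-mortem helper collects with `docker inspect`. For reference, the same fields can be read programmatically with the Docker Go SDK; this sketch (assuming a local Docker socket and the container name from the report) prints the container state and the host port that 127.0.0.1 publishes for the API server's 8443/tcp:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
	"github.com/docker/go-connections/nat"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	info, err := cli.ContainerInspect(context.Background(), "newest-cni-051083")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("status:", info.State.Status) // "running" in the report

	// Port bindings live under NetworkSettings.Ports, keyed by "port/proto";
	// 8443/tcp maps to 127.0.0.1:33202 in the dump above.
	for _, b := range info.NetworkSettings.Ports[nat.Port("8443/tcp")] {
		fmt.Printf("apiserver published on %s:%s\n", b.HostIP, b.HostPort)
	}
}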
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051083 -n newest-cni-051083
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-051083 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-726816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p old-k8s-version-726816 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-726816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ start   │ -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-449580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	│ stop    │ -p no-preload-449580 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:11 UTC │
	│ addons  │ enable dashboard -p no-preload-449580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ image   │ old-k8s-version-726816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ pause   │ -p old-k8s-version-726816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p cert-expiration-202048 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-202048       │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ image   │ no-preload-449580 image list --format=json                                                                                                                                                                                                    │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ pause   │ -p no-preload-449580 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ delete  │ -p cert-expiration-202048                                                                                                                                                                                                                     │ cert-expiration-202048       │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p disable-driver-mounts-270495                                                                                                                                                                                                               │ disable-driver-mounts-270495 │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ delete  │ -p no-preload-449580                                                                                                                                                                                                                          │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p no-preload-449580                                                                                                                                                                                                                          │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ addons  │ enable metrics-server -p embed-certs-051488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ stop    │ -p embed-certs-051488 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-051083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:12:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:12:21.725677  385034 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:12:21.726029  385034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:12:21.726045  385034 out.go:374] Setting ErrFile to fd 2...
	I1017 20:12:21.726052  385034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:12:21.726377  385034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:12:21.727105  385034 out.go:368] Setting JSON to false
	I1017 20:12:21.728959  385034 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6890,"bootTime":1760725052,"procs":415,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:12:21.729105  385034 start.go:141] virtualization: kvm guest
	I1017 20:12:21.731854  385034 out.go:179] * [newest-cni-051083] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:12:21.733920  385034 notify.go:220] Checking for updates...
	I1017 20:12:21.733951  385034 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:12:21.735576  385034 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:12:21.738834  385034 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:12:21.740596  385034 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:12:21.742094  385034 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:12:21.743607  385034 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:12:21.749732  385034 config.go:182] Loaded profile config "default-k8s-diff-port-563805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:21.749914  385034 config.go:182] Loaded profile config "embed-certs-051488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:21.750050  385034 config.go:182] Loaded profile config "kubernetes-upgrade-660693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:21.750264  385034 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:12:21.786553  385034 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:12:21.786758  385034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:12:21.880731  385034 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-17 20:12:21.859518834 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:12:21.880978  385034 docker.go:318] overlay module found
	I1017 20:12:21.885601  385034 out.go:179] * Using the docker driver based on user configuration
	I1017 20:12:21.887545  385034 start.go:305] selected driver: docker
	I1017 20:12:21.887574  385034 start.go:925] validating driver "docker" against <nil>
	I1017 20:12:21.887595  385034 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:12:21.888459  385034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:12:21.960435  385034 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-17 20:12:21.948858112 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:12:21.960689  385034 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1017 20:12:21.960730  385034 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1017 20:12:21.961012  385034 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 20:12:21.965431  385034 out.go:179] * Using Docker driver with root privileges
	I1017 20:12:21.966974  385034 cni.go:84] Creating CNI manager for ""
	I1017 20:12:21.967045  385034 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:21.967053  385034 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:12:21.967149  385034 start.go:349] cluster config:
	{Name:newest-cni-051083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-051083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:21.968824  385034 out.go:179] * Starting "newest-cni-051083" primary control-plane node in "newest-cni-051083" cluster
	I1017 20:12:21.970240  385034 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:12:21.971682  385034 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:12:21.973978  385034 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:12:21.974038  385034 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 20:12:21.974060  385034 cache.go:58] Caching tarball of preloaded images
	I1017 20:12:21.974078  385034 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:12:21.974175  385034 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:12:21.974191  385034 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:12:21.974329  385034 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/config.json ...
	I1017 20:12:21.974358  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/config.json: {Name:mk32842e78c30269f7c8b87106cd69b1a95516bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:21.998214  385034 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:12:21.998242  385034 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:12:21.998265  385034 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:12:21.998298  385034 start.go:360] acquireMachinesLock for newest-cni-051083: {Name:mk40bc92590455b2d7e0a97cfb06b266ec3e9a76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:12:21.998627  385034 start.go:364] duration metric: took 304.911µs to acquireMachinesLock for "newest-cni-051083"
	I1017 20:12:21.998661  385034 start.go:93] Provisioning new machine with config: &{Name:newest-cni-051083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-051083 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:12:21.998763  385034 start.go:125] createHost starting for "" (driver="docker")
	I1017 20:12:20.546875  376518 addons.go:514] duration metric: took 1.284205009s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1017 20:12:22.548093  376518 node_ready.go:57] node "embed-certs-051488" has "Ready":"False" status (will retry)
	I1017 20:12:20.602124  383050 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-563805 --name default-k8s-diff-port-563805 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-563805 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-563805 --network default-k8s-diff-port-563805 --ip 192.168.85.2 --volume default-k8s-diff-port-563805:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:12:21.500362  383050 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-563805 --format={{.State.Running}}
	I1017 20:12:21.521963  383050 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-563805 --format={{.State.Status}}
	I1017 20:12:21.544229  383050 cli_runner.go:164] Run: docker exec default-k8s-diff-port-563805 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:12:21.594016  383050 oci.go:144] the created container "default-k8s-diff-port-563805" has a running status.
	I1017 20:12:21.594063  383050 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa...
	I1017 20:12:22.132896  383050 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:12:22.162335  383050 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-563805 --format={{.State.Status}}
	I1017 20:12:22.183501  383050 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:12:22.183522  383050 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-563805 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:12:22.236663  383050 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-563805 --format={{.State.Status}}
	I1017 20:12:22.256624  383050 machine.go:93] provisionDockerMachine start ...
	I1017 20:12:22.256733  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:22.279600  383050 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:22.279920  383050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 20:12:22.279941  383050 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:12:22.420288  383050 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-563805
	
	I1017 20:12:22.420324  383050 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-563805"
	I1017 20:12:22.420418  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:22.441516  383050 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:22.441734  383050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 20:12:22.441771  383050 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-563805 && echo "default-k8s-diff-port-563805" | sudo tee /etc/hostname
	I1017 20:12:22.596611  383050 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-563805
	
	I1017 20:12:22.596704  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:22.618632  383050 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:22.618929  383050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 20:12:22.618960  383050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-563805' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-563805/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-563805' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:12:22.759647  383050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
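The hostname provisioning above runs short idempotent shell snippets over the forwarded SSH port (33194 here). A minimal Go sketch of the same pattern, using golang.org/x/crypto/ssh; the key path, user, and command are illustrative placeholders rather than minikube's actual wiring:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("id_rsa") // placeholder: the machine's generated key
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for local test VMs
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33194", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        // Same first probe as the log: ask the machine for its hostname.
        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }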
	I1017 20:12:22.759675  383050 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:12:22.759718  383050 ubuntu.go:190] setting up certificates
	I1017 20:12:22.759730  383050 provision.go:84] configureAuth start
	I1017 20:12:22.759808  383050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-563805
	I1017 20:12:22.780461  383050 provision.go:143] copyHostCerts
	I1017 20:12:22.780526  383050 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:12:22.780538  383050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:12:22.780613  383050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:12:22.780734  383050 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:12:22.780762  383050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:12:22.780806  383050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:12:22.780873  383050 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:12:22.780882  383050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:12:22.780905  383050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:12:22.780959  383050 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-563805 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-563805 localhost minikube]
	I1017 20:12:23.308421  383050 provision.go:177] copyRemoteCerts
	I1017 20:12:23.308481  383050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:12:23.308519  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:23.329340  383050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:12:23.430009  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:12:23.454240  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1017 20:12:23.474625  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:12:23.494563  383050 provision.go:87] duration metric: took 734.813132ms to configureAuth
	I1017 20:12:23.494598  383050 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:12:23.494810  383050 config.go:182] Loaded profile config "default-k8s-diff-port-563805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:23.494933  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:23.514501  383050 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:23.514721  383050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 20:12:23.514754  383050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:12:23.784789  383050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:12:23.784822  383050 machine.go:96] duration metric: took 1.528170485s to provisionDockerMachine
	I1017 20:12:23.784836  383050 client.go:171] duration metric: took 8.025728223s to LocalClient.Create
	I1017 20:12:23.784861  383050 start.go:167] duration metric: took 8.025806742s to libmachine.API.Create "default-k8s-diff-port-563805"
	I1017 20:12:23.784871  383050 start.go:293] postStartSetup for "default-k8s-diff-port-563805" (driver="docker")
	I1017 20:12:23.784886  383050 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:12:23.784975  383050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:12:23.785027  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:23.805143  383050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:12:23.906510  383050 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:12:23.910673  383050 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:12:23.910705  383050 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:12:23.910718  383050 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:12:23.910809  383050 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:12:23.910919  383050 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:12:23.911060  383050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:12:23.920453  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:23.943290  383050 start.go:296] duration metric: took 158.401148ms for postStartSetup
	I1017 20:12:23.943789  383050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-563805
	I1017 20:12:23.963958  383050 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/config.json ...
	I1017 20:12:23.964304  383050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:12:23.964365  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:23.983513  383050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:12:24.078260  383050 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:12:24.083595  383050 start.go:128] duration metric: took 8.327293608s to createHost
	I1017 20:12:24.083623  383050 start.go:83] releasing machines lock for "default-k8s-diff-port-563805", held for 8.327481429s
	I1017 20:12:24.083703  383050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-563805
	I1017 20:12:24.102809  383050 ssh_runner.go:195] Run: cat /version.json
	I1017 20:12:24.102874  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:24.102809  383050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:12:24.102986  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:24.123425  383050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:12:24.126521  383050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:12:24.273375  383050 ssh_runner.go:195] Run: systemctl --version
	I1017 20:12:24.280419  383050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:12:24.320296  383050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:12:24.325274  383050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:12:24.325353  383050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:12:24.356730  383050 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 20:12:24.356795  383050 start.go:495] detecting cgroup driver to use...
	I1017 20:12:24.356834  383050 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:12:24.356880  383050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:12:24.376110  383050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:12:24.390456  383050 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:12:24.390528  383050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:12:24.408764  383050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:12:24.427265  383050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:12:24.515281  383050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:12:24.613761  383050 docker.go:234] disabling docker service ...
	I1017 20:12:24.613822  383050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:12:24.634044  383050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:12:24.647845  383050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:12:24.740254  383050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:12:24.827137  383050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
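Before configuring CRI-O, the run stops, disables, and masks both cri-docker and docker (the systemctl calls above), leaving CRI-O as the node's only container runtime. A sketch of that same sequence as a small Go program; it must run as root, and failures from the stop steps are tolerated because the units may not be active:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Exactly the unit operations the log performs, in order.
        steps := [][]string{
            {"systemctl", "stop", "-f", "cri-docker.socket"},
            {"systemctl", "stop", "-f", "cri-docker.service"},
            {"systemctl", "disable", "cri-docker.socket"},
            {"systemctl", "mask", "cri-docker.service"},
            {"systemctl", "stop", "-f", "docker.socket"},
            {"systemctl", "stop", "-f", "docker.service"},
            {"systemctl", "disable", "docker.socket"},
            {"systemctl", "mask", "docker.service"},
        }
        for _, s := range steps {
            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
                log.Printf("%v: %v (%s)", s, err, out) // tolerated; unit may not exist or run
            }
        }
    }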
	I1017 20:12:24.840030  383050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:12:24.855464  383050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:12:24.855529  383050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:24.871114  383050 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:12:24.871196  383050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:24.881929  383050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:24.892248  383050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:24.914674  383050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:12:24.924769  383050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:24.936804  383050 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:24.969687  383050 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:25.031085  383050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:12:25.039496  383050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:12:25.049051  383050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:25.138635  383050 ssh_runner.go:195] Run: sudo systemctl restart crio
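The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and the systemd cgroup manager before crio is restarted. A minimal Go sketch of the two main substitutions (run as root on the node; the daemon-reload and restart that follow in the log still apply):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        // Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }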
	I1017 20:12:22.001657  385034 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:12:22.001925  385034 start.go:159] libmachine.API.Create for "newest-cni-051083" (driver="docker")
	I1017 20:12:22.001987  385034 client.go:168] LocalClient.Create starting
	I1017 20:12:22.002072  385034 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem
	I1017 20:12:22.002109  385034 main.go:141] libmachine: Decoding PEM data...
	I1017 20:12:22.002132  385034 main.go:141] libmachine: Parsing certificate...
	I1017 20:12:22.002196  385034 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem
	I1017 20:12:22.002220  385034 main.go:141] libmachine: Decoding PEM data...
	I1017 20:12:22.002235  385034 main.go:141] libmachine: Parsing certificate...
	I1017 20:12:22.002616  385034 cli_runner.go:164] Run: docker network inspect newest-cni-051083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:12:22.022402  385034 cli_runner.go:211] docker network inspect newest-cni-051083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:12:22.022472  385034 network_create.go:284] running [docker network inspect newest-cni-051083] to gather additional debugging logs...
	I1017 20:12:22.022492  385034 cli_runner.go:164] Run: docker network inspect newest-cni-051083
	W1017 20:12:22.041458  385034 cli_runner.go:211] docker network inspect newest-cni-051083 returned with exit code 1
	I1017 20:12:22.041497  385034 network_create.go:287] error running [docker network inspect newest-cni-051083]: docker network inspect newest-cni-051083: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-051083 not found
	I1017 20:12:22.041513  385034 network_create.go:289] output of [docker network inspect newest-cni-051083]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-051083 not found
	
	** /stderr **
	I1017 20:12:22.041603  385034 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:12:22.062104  385034 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d34a70da1174 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:b8:c9:c3:2e:b0} reservation:<nil>}
	I1017 20:12:22.062669  385034 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-07edace58173 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f3:28:2c:52:ce} reservation:<nil>}
	I1017 20:12:22.063211  385034 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a478249e8fe7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:51:65:8d:cb:60} reservation:<nil>}
	I1017 20:12:22.063791  385034 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7ed8ef1bc0a4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:6a:98:d7:e8:28} reservation:<nil>}
	I1017 20:12:22.064153  385034 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9a4aaba57340 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:16:30:99:20:8d:be} reservation:<nil>}
	I1017 20:12:22.064852  385034 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f65906aaca8c IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ba:86:9c:15:01:28} reservation:<nil>}
	I1017 20:12:22.065604  385034 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fbf0b0}
	I1017 20:12:22.065626  385034 network_create.go:124] attempt to create docker network newest-cni-051083 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1017 20:12:22.065690  385034 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-051083 newest-cni-051083
	I1017 20:12:22.128861  385034 network_create.go:108] docker network newest-cni-051083 192.168.103.0/24 created
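Subnet selection above walks candidate private /24 networks, skipping each one already owned by an existing bridge, and settles on 192.168.103.0/24. A self-contained Go sketch of that probe; the taken list stands in for what docker network inspect reported, and the step of 9 mirrors the candidates in the log (49, 58, 67, 76, 85, 94, 103):

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet returns the first candidate /24 that does not overlap
    // any already-taken network.
    func firstFreeSubnet(taken []*net.IPNet) *net.IPNet {
        for third := 49; third <= 254; third += 9 {
            candidate := &net.IPNet{
                IP:   net.IPv4(192, 168, byte(third), 0),
                Mask: net.CIDRMask(24, 32),
            }
            free := true
            for _, t := range taken {
                if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
                    free = false
                    break
                }
            }
            if free {
                return candidate
            }
        }
        return nil
    }

    func main() {
        var taken []*net.IPNet
        for _, c := range []string{"192.168.49.0/24", "192.168.58.0/24",
            "192.168.67.0/24", "192.168.76.0/24", "192.168.85.0/24", "192.168.94.0/24"} {
            _, n, _ := net.ParseCIDR(c)
            taken = append(taken, n)
        }
        fmt.Println(firstFreeSubnet(taken)) // prints 192.168.103.0/24, matching the log
    }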
	I1017 20:12:22.128902  385034 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-051083" container
	I1017 20:12:22.128977  385034 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:12:22.149037  385034 cli_runner.go:164] Run: docker volume create newest-cni-051083 --label name.minikube.sigs.k8s.io=newest-cni-051083 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:12:22.170567  385034 oci.go:103] Successfully created a docker volume newest-cni-051083
	I1017 20:12:22.170652  385034 cli_runner.go:164] Run: docker run --rm --name newest-cni-051083-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-051083 --entrypoint /usr/bin/test -v newest-cni-051083:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:12:22.613697  385034 oci.go:107] Successfully prepared a docker volume newest-cni-051083
	I1017 20:12:22.613779  385034 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:12:22.613821  385034 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:12:22.613900  385034 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-051083:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
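The preload is unpacked by a throwaway sidecar container whose entrypoint is tar: the tarball is mounted read-only and extracted straight into the named volume that becomes the node's /var. A Go sketch of that invocation via os/exec, following the command in the log (the host tarball path is illustrative and the image digest is omitted for brevity):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            // hypothetical host path to the lz4-compressed preload tarball
            "-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
            "-v", "newest-cni-051083:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }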
	I1017 20:12:27.263209  383050 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.124516507s)
	I1017 20:12:27.263248  383050 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:12:27.263304  383050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:12:27.268678  383050 start.go:563] Will wait 60s for crictl version
	I1017 20:12:27.268766  383050 ssh_runner.go:195] Run: which crictl
	I1017 20:12:27.273028  383050 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:12:27.302815  383050 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:12:27.302907  383050 ssh_runner.go:195] Run: crio --version
	I1017 20:12:27.336248  383050 ssh_runner.go:195] Run: crio --version
	I1017 20:12:27.368686  383050 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:12:25.188593  344862 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.066262578s)
	W1017 20:12:25.188711  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1017 20:12:25.188759  344862 logs.go:123] Gathering logs for kube-apiserver [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5] ...
	I1017 20:12:25.188789  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:25.223473  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:12:25.223509  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:12:25.255924  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:25.255958  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:25.311394  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:12:25.311435  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:25.340102  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:12:25.340136  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:25.368999  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:25.369030  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:25.402905  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:25.402940  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:25.423117  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:25.423159  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1017 20:12:24.548718  376518 node_ready.go:57] node "embed-certs-051488" has "Ready":"False" status (will retry)
	W1017 20:12:27.047437  376518 node_ready.go:57] node "embed-certs-051488" has "Ready":"False" status (will retry)
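The repeated node_ready warnings above come from a poll loop that re-reads the Node object until its Ready condition flips to True. A rough client-go sketch of such a wait, assuming the kubeconfig path and node name seen in the log; the real test harness layers timeouts and backoff on top:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-051488", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second) // the log retries while "Ready":"False"
        }
    }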
	I1017 20:12:27.370192  383050 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-563805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:12:27.394330  383050 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 20:12:27.399364  383050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
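This host.minikube.internal update is a classic idempotent rewrite: filter out any stale entry, append the fresh mapping, stage the result in /tmp, and copy it over /etc/hosts. The same logic as a small Go helper; note the write-in-place at the end, since inside a container /etc/hosts is a bind mount and cannot be replaced by rename:

    package main

    import (
        "os"
        "strings"
    )

    // setHostsEntry drops any stale line for name and appends the fresh
    // mapping, mirroring the grep -v / echo pipeline in the log above.
    func setHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        // Overwrite in place rather than rename (bind-mounted /etc/hosts).
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := setHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
            os.Exit(1)
        }
    }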
	I1017 20:12:27.420924  383050 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-563805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-563805 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:12:27.421032  383050 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:12:27.421073  383050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:12:27.459232  383050 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:12:27.459256  383050 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:12:27.459303  383050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:12:27.489791  383050 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:12:27.489820  383050 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:12:27.489831  383050 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1017 20:12:27.489935  383050 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-563805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-563805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:12:27.490021  383050 ssh_runner.go:195] Run: crio config
	I1017 20:12:27.542209  383050 cni.go:84] Creating CNI manager for ""
	I1017 20:12:27.542243  383050 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:27.542263  383050 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:12:27.542300  383050 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-563805 NodeName:default-k8s-diff-port-563805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:12:27.542478  383050 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-563805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:12:27.542552  383050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:12:27.553709  383050 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:12:27.553787  383050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:12:27.563884  383050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1017 20:12:27.581408  383050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:12:27.600811  383050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
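The rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new before kubeadm runs. As a hypothetical sanity check (not a step minikube performs here), kubeadm can parse and validate such a file without mutating the node via --dry-run:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Validates the config and renders what init would do, touching nothing.
        cmd := exec.Command("kubeadm", "init", "--dry-run",
            "--config", "/var/tmp/minikube/kubeadm.yaml.new")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }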
	I1017 20:12:27.618137  383050 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:12:27.623043  383050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:12:27.636326  383050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:27.730914  383050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:12:27.757823  383050 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805 for IP: 192.168.85.2
	I1017 20:12:27.757849  383050 certs.go:195] generating shared ca certs ...
	I1017 20:12:27.757870  383050 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:27.758055  383050 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:12:27.758128  383050 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:12:27.758143  383050 certs.go:257] generating profile certs ...
	I1017 20:12:27.758218  383050 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/client.key
	I1017 20:12:27.758247  383050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/client.crt with IP's: []
	I1017 20:12:28.127258  383050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/client.crt ...
	I1017 20:12:28.127291  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/client.crt: {Name:mkdb4908d85bb0fbf42b54fea70a53f69c796a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:28.127460  383050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/client.key ...
	I1017 20:12:28.127474  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/client.key: {Name:mkf77a964dd11655a181747805acc9c537a9aba5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:28.127551  383050 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.key.62088183
	I1017 20:12:28.127568  383050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.crt.62088183 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1017 20:12:28.210803  383050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.crt.62088183 ...
	I1017 20:12:28.210839  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.crt.62088183: {Name:mkeb4acf67adeb3a65d8f73c6ddca86fe7b0357f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:28.211024  383050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.key.62088183 ...
	I1017 20:12:28.211049  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.key.62088183: {Name:mk893006e82030ff0ae3f0128f0ea78a25344473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:28.211164  383050 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.crt.62088183 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.crt
	I1017 20:12:28.211293  383050 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.key.62088183 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.key
	I1017 20:12:28.211394  383050 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.key
	I1017 20:12:28.211429  383050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.crt with IP's: []
	I1017 20:12:28.272145  383050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.crt ...
	I1017 20:12:28.272175  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.crt: {Name:mkb68cb9add86d0869ff386211795c62543b4306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:28.272362  383050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.key ...
	I1017 20:12:28.272380  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.key: {Name:mk918e10ecf39d170a09ddecccfa035d26ac76ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:28.272605  383050 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:12:28.272655  383050 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:12:28.272672  383050 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:12:28.272708  383050 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:12:28.272750  383050 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:12:28.272782  383050 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:12:28.272834  383050 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:28.273588  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:12:28.294125  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:12:28.315974  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:12:28.337732  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:12:28.359201  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:12:28.380227  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:12:28.404893  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:12:28.427352  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/default-k8s-diff-port-563805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:12:28.447015  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:12:28.468149  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:12:28.488289  383050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:12:28.508105  383050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:12:28.521912  383050 ssh_runner.go:195] Run: openssl version
	I1017 20:12:28.528546  383050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:12:28.538162  383050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:28.542620  383050 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:28.542677  383050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:28.580591  383050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:12:28.590293  383050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:12:28.599953  383050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:12:28.604290  383050 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:12:28.604343  383050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:12:28.641557  383050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
	I1017 20:12:28.652033  383050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:12:28.662460  383050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:12:28.666958  383050 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:12:28.667022  383050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:12:28.704835  383050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
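
The test/ln pairs above follow the OpenSSL CA-directory convention: tools look up a CA in /etc/ssl/certs by its subject-name hash plus a ".0" suffix, which is what the interleaved `openssl x509 -hash -noout` calls compute (b5213941.0, 51391683.0, 3ec20f2e.0). The same link can be rebuilt by hand, using the minikubeCA.pem path from this run:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
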
	I1017 20:12:28.714396  383050 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:12:28.718390  383050 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
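
The failed stat above is the "first start" probe: /var/lib/minikube/certs/apiserver-kubelet-client.crt is only written once kubeadm has initialized a node, so its absence tells minikube this is a fresh cluster rather than a restart. The same check in isolation (hypothetical manual probe, same path):

    if stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
        echo "node was initialized before"
    else
        echo "first start"
    fi
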
	I1017 20:12:28.718444  383050 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-563805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-563805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:28.718518  383050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:12:28.718581  383050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:12:28.749291  383050 cri.go:89] found id: ""
	I1017 20:12:28.749369  383050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:12:28.758528  383050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:12:28.766885  383050 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:12:28.766958  383050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:12:28.776822  383050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:12:28.776843  383050 kubeadm.go:157] found existing configuration files:
	
	I1017 20:12:28.776888  383050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1017 20:12:28.786121  383050 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:12:28.786191  383050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:12:28.794360  383050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1017 20:12:28.803982  383050 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:12:28.804043  383050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:12:28.812651  383050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1017 20:12:28.821854  383050 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:12:28.821919  383050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:12:28.830552  383050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1017 20:12:28.838790  383050 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:12:28.838858  383050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
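
The four grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected API endpoint, and removed otherwise so kubeadm can regenerate it. Condensed into one loop (a sketch of the same logic, not minikube's actual code):

    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
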
	I1017 20:12:28.847029  383050 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:12:28.913309  383050 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 20:12:28.982349  383050 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
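
The --ignore-preflight-errors list in the init command covers checks that are expected to fail inside a Docker container (ports, swap, memory, SystemVerification), which is why the two [WARNING ...] lines above are merely logged instead of aborting the run. The checks can be replayed on their own against the generated config (hypothetical manual step; the test does not run this):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
        --config /var/tmp/minikube/kubeadm.yaml
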
	I1017 20:12:27.169311  385034 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-051083:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.555366003s)
	I1017 20:12:27.169351  385034 kic.go:203] duration metric: took 4.555527609s to extract preloaded images to volume ...
	W1017 20:12:27.169455  385034 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 20:12:27.169496  385034 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 20:12:27.169533  385034 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:12:27.236703  385034 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-051083 --name newest-cni-051083 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-051083 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-051083 --network newest-cni-051083 --ip 192.168.103.2 --volume newest-cni-051083:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
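
Each --publish=127.0.0.1:: flag above omits the host port, so Docker assigns a free ephemeral port per container port; the SSH mapping is recovered afterwards with the inspect template that appears a few lines below. The equivalent one-off query, using the container name from this run:

    docker container inspect -f \
        '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-051083
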
	I1017 20:12:27.553787  385034 cli_runner.go:164] Run: docker container inspect newest-cni-051083 --format={{.State.Running}}
	I1017 20:12:27.574846  385034 cli_runner.go:164] Run: docker container inspect newest-cni-051083 --format={{.State.Status}}
	I1017 20:12:27.595939  385034 cli_runner.go:164] Run: docker exec newest-cni-051083 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:12:27.649915  385034 oci.go:144] the created container "newest-cni-051083" has a running status.
	I1017 20:12:27.649947  385034 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa...
	I1017 20:12:28.284930  385034 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:12:28.313914  385034 cli_runner.go:164] Run: docker container inspect newest-cni-051083 --format={{.State.Status}}
	I1017 20:12:28.333060  385034 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:12:28.333080  385034 kic_runner.go:114] Args: [docker exec --privileged newest-cni-051083 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:12:28.385255  385034 cli_runner.go:164] Run: docker container inspect newest-cni-051083 --format={{.State.Status}}
	I1017 20:12:28.407256  385034 machine.go:93] provisionDockerMachine start ...
	I1017 20:12:28.407376  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:28.427479  385034 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:28.427824  385034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 20:12:28.427848  385034 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:12:28.563106  385034 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-051083
	
	I1017 20:12:28.563132  385034 ubuntu.go:182] provisioning hostname "newest-cni-051083"
	I1017 20:12:28.563202  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:28.582586  385034 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:28.582855  385034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 20:12:28.582871  385034 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-051083 && echo "newest-cni-051083" | sudo tee /etc/hostname
	I1017 20:12:28.732754  385034 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-051083
	
	I1017 20:12:28.732847  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:28.753126  385034 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:28.753395  385034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 20:12:28.753427  385034 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-051083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-051083/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-051083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:12:28.893011  385034 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:12:28.893128  385034 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:12:28.893178  385034 ubuntu.go:190] setting up certificates
	I1017 20:12:28.893194  385034 provision.go:84] configureAuth start
	I1017 20:12:28.893265  385034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-051083
	I1017 20:12:28.912838  385034 provision.go:143] copyHostCerts
	I1017 20:12:28.912909  385034 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:12:28.912925  385034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:12:28.913000  385034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:12:28.913116  385034 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:12:28.913129  385034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:12:28.913166  385034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:12:28.913237  385034 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:12:28.913246  385034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:12:28.913279  385034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:12:28.913392  385034 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.newest-cni-051083 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-051083]
	I1017 20:12:29.209061  385034 provision.go:177] copyRemoteCerts
	I1017 20:12:29.209121  385034 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:12:29.209158  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:29.228992  385034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:12:29.327595  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:12:29.348800  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:12:29.368796  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:12:29.389209  385034 provision.go:87] duration metric: took 495.995104ms to configureAuth
	I1017 20:12:29.389238  385034 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:12:29.389448  385034 config.go:182] Loaded profile config "newest-cni-051083": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:29.389590  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:29.412619  385034 main.go:141] libmachine: Using SSH client type: native
	I1017 20:12:29.412928  385034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 20:12:29.412951  385034 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:12:29.666672  385034 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:12:29.666699  385034 machine.go:96] duration metric: took 1.259413271s to provisionDockerMachine
	I1017 20:12:29.666708  385034 client.go:171] duration metric: took 7.664711687s to LocalClient.Create
	I1017 20:12:29.666726  385034 start.go:167] duration metric: took 7.664803946s to libmachine.API.Create "newest-cni-051083"
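
The tee into /etc/sysconfig/crio.minikube a few lines above persists --insecure-registry 10.96.0.0/12 as an environment file rather than an edit to crio.conf, and the chained systemctl restart applies it immediately. Confirming the wiring on a node (this assumes the kicbase crio unit references the file via an EnvironmentFile directive, which is not shown in this log):

    systemctl cat crio | grep -i EnvironmentFile
    cat /etc/sysconfig/crio.minikube
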
	I1017 20:12:29.666733  385034 start.go:293] postStartSetup for "newest-cni-051083" (driver="docker")
	I1017 20:12:29.666758  385034 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:12:29.666821  385034 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:12:29.666862  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:29.686112  385034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:12:29.787827  385034 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:12:29.791786  385034 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:12:29.791813  385034 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:12:29.791825  385034 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:12:29.791887  385034 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:12:29.792048  385034 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:12:29.792174  385034 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:12:29.800658  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:29.822552  385034 start.go:296] duration metric: took 155.802523ms for postStartSetup
	I1017 20:12:29.822998  385034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-051083
	I1017 20:12:29.842065  385034 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/config.json ...
	I1017 20:12:29.842426  385034 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:12:29.842473  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:29.861604  385034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:12:29.956344  385034 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:12:29.961292  385034 start.go:128] duration metric: took 7.962509523s to createHost
	I1017 20:12:29.961321  385034 start.go:83] releasing machines lock for "newest-cni-051083", held for 7.962672012s
	I1017 20:12:29.961394  385034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-051083
	I1017 20:12:29.979784  385034 ssh_runner.go:195] Run: cat /version.json
	I1017 20:12:29.979845  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:29.979790  385034 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:12:29.979970  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:30.000190  385034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:12:30.000523  385034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:12:30.154335  385034 ssh_runner.go:195] Run: systemctl --version
	I1017 20:12:30.161686  385034 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:12:30.199115  385034 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:12:30.204135  385034 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:12:30.204208  385034 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:12:30.234013  385034 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 20:12:30.234043  385034 start.go:495] detecting cgroup driver to use...
	I1017 20:12:30.234083  385034 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:12:30.234136  385034 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:12:30.251847  385034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:12:30.266325  385034 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:12:30.266383  385034 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:12:30.287093  385034 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:12:30.306571  385034 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:12:30.393115  385034 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:12:30.483007  385034 docker.go:234] disabling docker service ...
	I1017 20:12:30.483095  385034 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:12:30.503350  385034 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:12:30.516907  385034 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:12:30.604508  385034 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:12:30.688801  385034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:12:30.703249  385034 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:12:30.719088  385034 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:12:30.719153  385034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.730628  385034 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:12:30.730700  385034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.741040  385034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.751209  385034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.761361  385034 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:12:30.770688  385034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.781029  385034 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:12:30.795877  385034 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
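
The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, put conmon in the pod cgroup, and add a default sysctl so unprivileged binds to low ports work. Afterwards the touched keys should read roughly as follows (reconstructed sketch; their placement within the file may differ):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
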
	I1017 20:12:30.806752  385034 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:12:30.814868  385034 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:12:30.822981  385034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:30.903136  385034 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:12:31.021412  385034 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:12:31.021482  385034 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:12:31.025846  385034 start.go:563] Will wait 60s for crictl version
	I1017 20:12:31.025913  385034 ssh_runner.go:195] Run: which crictl
	I1017 20:12:31.030144  385034 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:12:31.058441  385034 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:12:31.058540  385034 ssh_runner.go:195] Run: crio --version
	I1017 20:12:31.088254  385034 ssh_runner.go:195] Run: crio --version
	I1017 20:12:31.122238  385034 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:12:31.123563  385034 cli_runner.go:164] Run: docker network inspect newest-cni-051083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:12:31.141767  385034 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1017 20:12:31.146295  385034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:12:31.159616  385034 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1017 20:12:31.161074  385034 kubeadm.go:883] updating cluster {Name:newest-cni-051083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-051083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:12:31.161232  385034 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:12:31.161336  385034 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:12:31.197112  385034 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:12:31.197140  385034 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:12:31.197204  385034 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:12:31.227269  385034 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:12:31.227295  385034 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:12:31.227302  385034 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1017 20:12:31.227388  385034 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-051083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-051083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:12:31.227465  385034 ssh_runner.go:195] Run: crio config
	I1017 20:12:31.273251  385034 cni.go:84] Creating CNI manager for ""
	I1017 20:12:31.273274  385034 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:31.273298  385034 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1017 20:12:31.273336  385034 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-051083 NodeName:newest-cni-051083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:12:31.273489  385034 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-051083"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
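
A generated config like the one above can be checked for schema and deprecation problems before init with the bundled binary (hypothetical manual step; this run hands the file straight to kubeadm init):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml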
	
	I1017 20:12:31.273567  385034 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:12:31.282502  385034 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:12:31.282575  385034 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:12:31.290669  385034 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 20:12:31.304514  385034 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:12:31.321025  385034 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1017 20:12:31.334506  385034 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:12:31.338622  385034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:12:31.350175  385034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:31.442920  385034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:12:31.466642  385034 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083 for IP: 192.168.103.2
	I1017 20:12:31.466674  385034 certs.go:195] generating shared ca certs ...
	I1017 20:12:31.466691  385034 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:31.466860  385034 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:12:31.466899  385034 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:12:31.466907  385034 certs.go:257] generating profile certs ...
	I1017 20:12:31.466978  385034 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/client.key
	I1017 20:12:31.467004  385034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/client.crt with IP's: []
	I1017 20:12:27.975308  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1017 20:12:29.048035  376518 node_ready.go:57] node "embed-certs-051488" has "Ready":"False" status (will retry)
	W1017 20:12:31.547841  376518 node_ready.go:57] node "embed-certs-051488" has "Ready":"False" status (will retry)
	I1017 20:12:32.047477  376518 node_ready.go:49] node "embed-certs-051488" is "Ready"
	I1017 20:12:32.047508  376518 node_ready.go:38] duration metric: took 11.503201874s for node "embed-certs-051488" to be "Ready" ...
	I1017 20:12:32.047523  376518 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:12:32.047580  376518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:12:32.062138  376518 api_server.go:72] duration metric: took 12.799508344s to wait for apiserver process to appear ...
	I1017 20:12:32.062165  376518 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:12:32.062196  376518 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1017 20:12:32.067509  376518 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1017 20:12:32.068553  376518 api_server.go:141] control plane version: v1.34.1
	I1017 20:12:32.068579  376518 api_server.go:131] duration metric: took 6.405116ms to wait for apiserver health ...
	I1017 20:12:32.068589  376518 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:12:32.072828  376518 system_pods.go:59] 8 kube-system pods found
	I1017 20:12:32.072911  376518 system_pods.go:61] "coredns-66bc5c9577-gq5dd" [4c8aa324-2af3-4de9-9e87-e0d7c2049d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:12:32.072921  376518 system_pods.go:61] "etcd-embed-certs-051488" [eaf5eefb-016e-480b-95f0-987e5398e403] Running
	I1017 20:12:32.072929  376518 system_pods.go:61] "kindnet-rzd8h" [2175403d-fd55-45fd-8a79-62390167379e] Running
	I1017 20:12:32.072934  376518 system_pods.go:61] "kube-apiserver-embed-certs-051488" [81cb7a64-9a96-49cb-87d6-0ca6b2a06ff4] Running
	I1017 20:12:32.072940  376518 system_pods.go:61] "kube-controller-manager-embed-certs-051488" [c7f3bc8e-83ba-4289-a6bf-f9e34608a227] Running
	I1017 20:12:32.072945  376518 system_pods.go:61] "kube-proxy-95wmw" [51a10ca2-e69d-428e-9703-fdaa7b794cda] Running
	I1017 20:12:32.072961  376518 system_pods.go:61] "kube-scheduler-embed-certs-051488" [4d83e37f-3923-41dc-9dd3-c24adfbddf62] Running
	I1017 20:12:32.072969  376518 system_pods.go:61] "storage-provisioner" [4b66cc71-3175-46bd-93d2-28303821da56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:12:32.072983  376518 system_pods.go:74] duration metric: took 4.381178ms to wait for pod list to return data ...
	I1017 20:12:32.072994  376518 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:12:32.075977  376518 default_sa.go:45] found service account: "default"
	I1017 20:12:32.076045  376518 default_sa.go:55] duration metric: took 3.038574ms for default service account to be created ...
	I1017 20:12:32.076071  376518 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:12:32.078929  376518 system_pods.go:86] 8 kube-system pods found
	I1017 20:12:32.078963  376518 system_pods.go:89] "coredns-66bc5c9577-gq5dd" [4c8aa324-2af3-4de9-9e87-e0d7c2049d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:12:32.078972  376518 system_pods.go:89] "etcd-embed-certs-051488" [eaf5eefb-016e-480b-95f0-987e5398e403] Running
	I1017 20:12:32.078979  376518 system_pods.go:89] "kindnet-rzd8h" [2175403d-fd55-45fd-8a79-62390167379e] Running
	I1017 20:12:32.078985  376518 system_pods.go:89] "kube-apiserver-embed-certs-051488" [81cb7a64-9a96-49cb-87d6-0ca6b2a06ff4] Running
	I1017 20:12:32.078995  376518 system_pods.go:89] "kube-controller-manager-embed-certs-051488" [c7f3bc8e-83ba-4289-a6bf-f9e34608a227] Running
	I1017 20:12:32.079000  376518 system_pods.go:89] "kube-proxy-95wmw" [51a10ca2-e69d-428e-9703-fdaa7b794cda] Running
	I1017 20:12:32.079006  376518 system_pods.go:89] "kube-scheduler-embed-certs-051488" [4d83e37f-3923-41dc-9dd3-c24adfbddf62] Running
	I1017 20:12:32.079014  376518 system_pods.go:89] "storage-provisioner" [4b66cc71-3175-46bd-93d2-28303821da56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:12:32.079041  376518 retry.go:31] will retry after 280.81201ms: missing components: kube-dns
	I1017 20:12:32.364977  376518 system_pods.go:86] 8 kube-system pods found
	I1017 20:12:32.365022  376518 system_pods.go:89] "coredns-66bc5c9577-gq5dd" [4c8aa324-2af3-4de9-9e87-e0d7c2049d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:12:32.365031  376518 system_pods.go:89] "etcd-embed-certs-051488" [eaf5eefb-016e-480b-95f0-987e5398e403] Running
	I1017 20:12:32.365038  376518 system_pods.go:89] "kindnet-rzd8h" [2175403d-fd55-45fd-8a79-62390167379e] Running
	I1017 20:12:32.365044  376518 system_pods.go:89] "kube-apiserver-embed-certs-051488" [81cb7a64-9a96-49cb-87d6-0ca6b2a06ff4] Running
	I1017 20:12:32.365055  376518 system_pods.go:89] "kube-controller-manager-embed-certs-051488" [c7f3bc8e-83ba-4289-a6bf-f9e34608a227] Running
	I1017 20:12:32.365060  376518 system_pods.go:89] "kube-proxy-95wmw" [51a10ca2-e69d-428e-9703-fdaa7b794cda] Running
	I1017 20:12:32.365068  376518 system_pods.go:89] "kube-scheduler-embed-certs-051488" [4d83e37f-3923-41dc-9dd3-c24adfbddf62] Running
	I1017 20:12:32.365078  376518 system_pods.go:89] "storage-provisioner" [4b66cc71-3175-46bd-93d2-28303821da56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:12:32.365096  376518 retry.go:31] will retry after 251.191698ms: missing components: kube-dns
	I1017 20:12:32.619941  376518 system_pods.go:86] 8 kube-system pods found
	I1017 20:12:32.619975  376518 system_pods.go:89] "coredns-66bc5c9577-gq5dd" [4c8aa324-2af3-4de9-9e87-e0d7c2049d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:12:32.619982  376518 system_pods.go:89] "etcd-embed-certs-051488" [eaf5eefb-016e-480b-95f0-987e5398e403] Running
	I1017 20:12:32.619988  376518 system_pods.go:89] "kindnet-rzd8h" [2175403d-fd55-45fd-8a79-62390167379e] Running
	I1017 20:12:32.619992  376518 system_pods.go:89] "kube-apiserver-embed-certs-051488" [81cb7a64-9a96-49cb-87d6-0ca6b2a06ff4] Running
	I1017 20:12:32.619997  376518 system_pods.go:89] "kube-controller-manager-embed-certs-051488" [c7f3bc8e-83ba-4289-a6bf-f9e34608a227] Running
	I1017 20:12:32.620002  376518 system_pods.go:89] "kube-proxy-95wmw" [51a10ca2-e69d-428e-9703-fdaa7b794cda] Running
	I1017 20:12:32.620006  376518 system_pods.go:89] "kube-scheduler-embed-certs-051488" [4d83e37f-3923-41dc-9dd3-c24adfbddf62] Running
	I1017 20:12:32.620013  376518 system_pods.go:89] "storage-provisioner" [4b66cc71-3175-46bd-93d2-28303821da56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:12:32.620038  376518 retry.go:31] will retry after 460.199391ms: missing components: kube-dns
	I1017 20:12:33.085972  376518 system_pods.go:86] 8 kube-system pods found
	I1017 20:12:33.086010  376518 system_pods.go:89] "coredns-66bc5c9577-gq5dd" [4c8aa324-2af3-4de9-9e87-e0d7c2049d50] Running
	I1017 20:12:33.086018  376518 system_pods.go:89] "etcd-embed-certs-051488" [eaf5eefb-016e-480b-95f0-987e5398e403] Running
	I1017 20:12:33.086024  376518 system_pods.go:89] "kindnet-rzd8h" [2175403d-fd55-45fd-8a79-62390167379e] Running
	I1017 20:12:33.086038  376518 system_pods.go:89] "kube-apiserver-embed-certs-051488" [81cb7a64-9a96-49cb-87d6-0ca6b2a06ff4] Running
	I1017 20:12:33.086045  376518 system_pods.go:89] "kube-controller-manager-embed-certs-051488" [c7f3bc8e-83ba-4289-a6bf-f9e34608a227] Running
	I1017 20:12:33.086051  376518 system_pods.go:89] "kube-proxy-95wmw" [51a10ca2-e69d-428e-9703-fdaa7b794cda] Running
	I1017 20:12:33.086059  376518 system_pods.go:89] "kube-scheduler-embed-certs-051488" [4d83e37f-3923-41dc-9dd3-c24adfbddf62] Running
	I1017 20:12:33.086064  376518 system_pods.go:89] "storage-provisioner" [4b66cc71-3175-46bd-93d2-28303821da56] Running
	I1017 20:12:33.086076  376518 system_pods.go:126] duration metric: took 1.009996656s to wait for k8s-apps to be running ...
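
The retry loop that just finished polls the kube-system pod list until nothing is left Pending (the repeated "8 kube-system pods found" blocks above, each ending in "will retry after ..."). Roughly the same readiness gate expressed as a manual kubectl check, not minikube's poller:

    kubectl -n kube-system wait --for=condition=Ready pod --all --timeout=120s
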
	I1017 20:12:33.086086  376518 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:12:33.086146  376518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:12:33.104327  376518 system_svc.go:56] duration metric: took 18.230673ms WaitForService to wait for kubelet
	I1017 20:12:33.104360  376518 kubeadm.go:586] duration metric: took 13.841736715s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:12:33.104385  376518 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:12:33.108252  376518 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:12:33.108280  376518 node_conditions.go:123] node cpu capacity is 8
	I1017 20:12:33.108298  376518 node_conditions.go:105] duration metric: took 3.907208ms to run NodePressure ...
	I1017 20:12:33.108312  376518 start.go:241] waiting for startup goroutines ...
	I1017 20:12:33.108322  376518 start.go:246] waiting for cluster config update ...
	I1017 20:12:33.108335  376518 start.go:255] writing updated cluster config ...
	I1017 20:12:33.108629  376518 ssh_runner.go:195] Run: rm -f paused
	I1017 20:12:33.113094  376518 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:12:33.186119  376518 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gq5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.191579  376518 pod_ready.go:94] pod "coredns-66bc5c9577-gq5dd" is "Ready"
	I1017 20:12:33.191610  376518 pod_ready.go:86] duration metric: took 5.45888ms for pod "coredns-66bc5c9577-gq5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.194326  376518 pod_ready.go:83] waiting for pod "etcd-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.199109  376518 pod_ready.go:94] pod "etcd-embed-certs-051488" is "Ready"
	I1017 20:12:33.199132  376518 pod_ready.go:86] duration metric: took 4.779572ms for pod "etcd-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.201635  376518 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.206347  376518 pod_ready.go:94] pod "kube-apiserver-embed-certs-051488" is "Ready"
	I1017 20:12:33.206376  376518 pod_ready.go:86] duration metric: took 4.710188ms for pod "kube-apiserver-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.208800  376518 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.518162  376518 pod_ready.go:94] pod "kube-controller-manager-embed-certs-051488" is "Ready"
	I1017 20:12:33.518195  376518 pod_ready.go:86] duration metric: took 309.369744ms for pod "kube-controller-manager-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:33.717515  376518 pod_ready.go:83] waiting for pod "kube-proxy-95wmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:34.117998  376518 pod_ready.go:94] pod "kube-proxy-95wmw" is "Ready"
	I1017 20:12:34.118029  376518 pod_ready.go:86] duration metric: took 400.485054ms for pod "kube-proxy-95wmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:34.319260  376518 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:34.717708  376518 pod_ready.go:94] pod "kube-scheduler-embed-certs-051488" is "Ready"
	I1017 20:12:34.717767  376518 pod_ready.go:86] duration metric: took 398.475006ms for pod "kube-scheduler-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:12:34.717782  376518 pod_ready.go:40] duration metric: took 1.604651954s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:12:34.781701  376518 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:12:34.783860  376518 out.go:179] * Done! kubectl is now configured to use "embed-certs-051488" cluster and "default" namespace by default
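The 376518 run above gates success on every labeled kube-system pod reaching "Ready". For a by-hand spot-check of the same condition after such a run, a minimal sketch (assuming kubectl on PATH and the embed-certs-051488 context; the 240s timeout mirrors the harness's 4m0s budget):

	kubectl config use-context embed-certs-051488
	# block until CoreDNS reports Ready, as the pod_ready wait does
	kubectl wait -n kube-system --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
	# then eyeball the remaining control-plane pods
	kubectl get pods -n kube-system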
	I1017 20:12:32.340835  385034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/client.crt ...
	I1017 20:12:32.340865  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/client.crt: {Name:mka29cf8226e58f8e6b43f5640866adcad75ebd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:32.341088  385034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/client.key ...
	I1017 20:12:32.341105  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/client.key: {Name:mk8a4693b49a6259eb801439094ac4a838948385 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:32.341236  385034 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.key.17fdb1e4
	I1017 20:12:32.341260  385034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.crt.17fdb1e4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1017 20:12:32.507233  385034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.crt.17fdb1e4 ...
	I1017 20:12:32.507264  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.crt.17fdb1e4: {Name:mk14e5666de92b68a63d8c6419b53f312b0e5045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:32.507480  385034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.key.17fdb1e4 ...
	I1017 20:12:32.507500  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.key.17fdb1e4: {Name:mk4d02f46380b38214d5651d71bcd2f66de8b6f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:32.507642  385034 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.crt.17fdb1e4 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.crt
	I1017 20:12:32.507765  385034 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.key.17fdb1e4 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.key
	I1017 20:12:32.507856  385034 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.key
	I1017 20:12:32.507877  385034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.crt with IP's: []
	I1017 20:12:32.626565  385034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.crt ...
	I1017 20:12:32.626599  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.crt: {Name:mk042a89c7506fa7e4a67833571f34e3c5e2d196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:32.626831  385034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.key ...
	I1017 20:12:32.626856  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.key: {Name:mkb97a3c9230d4ffed1653eef5f7638dfd900392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:32.627090  385034 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:12:32.627142  385034 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:12:32.627156  385034 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:12:32.627193  385034 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:12:32.627238  385034 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:12:32.627272  385034 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:12:32.627327  385034 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:12:32.627960  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:12:32.648057  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:12:32.667677  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:12:32.687134  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:12:32.707289  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 20:12:32.726519  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:12:32.745514  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:12:32.765897  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/newest-cni-051083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:12:32.785647  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:12:32.806857  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:12:32.831509  385034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:12:32.855874  385034 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:12:32.872706  385034 ssh_runner.go:195] Run: openssl version
	I1017 20:12:32.879303  385034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:12:32.889953  385034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:12:32.894531  385034 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:12:32.894605  385034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:12:32.933119  385034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
	I1017 20:12:32.942857  385034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:12:32.951958  385034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:12:32.955942  385034 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:12:32.956015  385034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:12:32.991886  385034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:12:33.003592  385034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:12:33.013686  385034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:33.018130  385034 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:33.018226  385034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:12:33.055623  385034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
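The three test-and-link Run: blocks above install each CA into the OpenSSL trust directory under its subject hash. The same convention, sketched for one cert (paths taken from the log; the hash value, e.g. b5213941, is whatever openssl prints for that cert):

	# compute the subject hash OpenSSL uses to index CAs in /etc/ssl/certs
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# link the cert under <hash>.0 so TLS clients on the node can resolve it
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"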
	I1017 20:12:33.066672  385034 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:12:33.071496  385034 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:12:33.071565  385034 kubeadm.go:400] StartCluster: {Name:newest-cni-051083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-051083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:12:33.071652  385034 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:12:33.071712  385034 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:12:33.106040  385034 cri.go:89] found id: ""
	I1017 20:12:33.106108  385034 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:12:33.116767  385034 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:12:33.126409  385034 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:12:33.126485  385034 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:12:33.135879  385034 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:12:33.135906  385034 kubeadm.go:157] found existing configuration files:
	
	I1017 20:12:33.135959  385034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:12:33.148886  385034 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:12:33.148963  385034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:12:33.159983  385034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:12:33.168549  385034 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:12:33.168611  385034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:12:33.177655  385034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:12:33.188138  385034 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:12:33.188199  385034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:12:33.198426  385034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:12:33.208553  385034 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:12:33.208619  385034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:12:33.219205  385034 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:12:33.293460  385034 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 20:12:33.361102  385034 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
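Both preflight warnings above are expected inside the docker driver's kic container: no "configs" kernel module ships there, so kubeadm's SystemVerification cannot load the kernel config, and the image starts kubelet without enabling its systemd unit, which trips the Service-Kubelet check. To see which kernel-config source a host actually exposes (a hypothetical spot-check, not part of the harness):

	# kubeadm's verifier looks for the kernel config in /proc or /boot
	ls /proc/config.gz /boot/config-$(uname -r) 2>/dev/null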
	I1017 20:12:32.975954  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1017 20:12:32.976022  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:12:32.976106  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:12:33.005970  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:33.005998  344862 cri.go:89] found id: "9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	I1017 20:12:33.006003  344862 cri.go:89] found id: ""
	I1017 20:12:33.006040  344862 logs.go:282] 2 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]
	I1017 20:12:33.006123  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:33.010252  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:33.014502  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:12:33.014568  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:12:33.045586  344862 cri.go:89] found id: ""
	I1017 20:12:33.045618  344862 logs.go:282] 0 containers: []
	W1017 20:12:33.045630  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:12:33.045639  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:12:33.045700  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:12:33.076426  344862 cri.go:89] found id: ""
	I1017 20:12:33.076452  344862 logs.go:282] 0 containers: []
	W1017 20:12:33.076460  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:12:33.076466  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:12:33.076514  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:12:33.112313  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:33.112339  344862 cri.go:89] found id: ""
	I1017 20:12:33.112350  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:12:33.112407  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:33.117428  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:12:33.117494  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:12:33.155639  344862 cri.go:89] found id: ""
	I1017 20:12:33.155664  344862 logs.go:282] 0 containers: []
	W1017 20:12:33.155674  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:12:33.155682  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:12:33.155734  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:12:33.186649  344862 cri.go:89] found id: "a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:33.186671  344862 cri.go:89] found id: "8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:33.186676  344862 cri.go:89] found id: ""
	I1017 20:12:33.186685  344862 logs.go:282] 2 containers: [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2]
	I1017 20:12:33.186766  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:33.191970  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:33.196617  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:12:33.196685  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:12:33.229611  344862 cri.go:89] found id: ""
	I1017 20:12:33.229645  344862 logs.go:282] 0 containers: []
	W1017 20:12:33.229658  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:12:33.229667  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:12:33.229725  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:12:33.265307  344862 cri.go:89] found id: ""
	I1017 20:12:33.265338  344862 logs.go:282] 0 containers: []
	W1017 20:12:33.265350  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:12:33.265369  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:12:33.265383  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:33.297987  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:33.298023  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:12:33.353126  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:12:33.353174  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:12:33.452063  344862 logs.go:123] Gathering logs for kube-apiserver [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5] ...
	I1017 20:12:33.452109  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:33.488222  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:33.488260  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:33.548524  344862 logs.go:123] Gathering logs for kube-controller-manager [8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2] ...
	I1017 20:12:33.548575  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8369cb92da9687047506a3dffce9701c5ed66b2cef0a8d0a91a173d32a49dac2"
	I1017 20:12:33.579282  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:33.579316  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:33.611863  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:33.611892  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:33.632079  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:12:33.632119  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1017 20:12:35.938402  344862 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (2.30626298s)
	W1017 20:12:35.938449  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:58430->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:58430->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1017 20:12:35.938459  344862 logs.go:123] Gathering logs for kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca] ...
	I1017 20:12:35.938478  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	W1017 20:12:35.976669  344862 logs.go:130] failed kube-apiserver [9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca": Process exited with status 1
	stdout:
	
	stderr:
	E1017 20:12:35.974053    6601 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca\": container with ID starting with 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca not found: ID does not exist" containerID="9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	time="2025-10-17T20:12:35Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca\": container with ID starting with 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca not found: ID does not exist"
	 output: 
	** stderr ** 
	E1017 20:12:35.974053    6601 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca\": container with ID starting with 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca not found: ID does not exist" containerID="9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca"
	time="2025-10-17T20:12:35Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca\": container with ID starting with 9b60c5875e173b079bfa925082f33813994a87db261491fb57721b76a89d90ca not found: ID does not exist"
	
	** /stderr **
	I1017 20:12:39.646693  383050 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:12:39.646806  383050 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:12:39.646950  383050 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:12:39.647044  383050 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 20:12:39.647138  383050 kubeadm.go:318] OS: Linux
	I1017 20:12:39.647223  383050 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:12:39.647340  383050 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:12:39.647427  383050 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:12:39.647487  383050 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:12:39.647546  383050 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:12:39.647625  383050 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:12:39.647716  383050 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:12:39.647812  383050 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 20:12:39.647921  383050 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:12:39.648055  383050 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:12:39.648169  383050 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:12:39.648253  383050 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 20:12:39.652516  383050 out.go:252]   - Generating certificates and keys ...
	I1017 20:12:39.652634  383050 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:12:39.652722  383050 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:12:39.652852  383050 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:12:39.652959  383050 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:12:39.653059  383050 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:12:39.653146  383050 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:12:39.653245  383050 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:12:39.653453  383050 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-563805 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 20:12:39.653544  383050 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:12:39.653751  383050 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-563805 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 20:12:39.653840  383050 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:12:39.653930  383050 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:12:39.654005  383050 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:12:39.654115  383050 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:12:39.654193  383050 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:12:39.654297  383050 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:12:39.654381  383050 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:12:39.654480  383050 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:12:39.654569  383050 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:12:39.654699  383050 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:12:39.654810  383050 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 20:12:39.657533  383050 out.go:252]   - Booting up control plane ...
	I1017 20:12:39.657641  383050 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:12:39.657707  383050 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:12:39.657818  383050 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:12:39.657957  383050 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:12:39.658091  383050 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:12:39.658245  383050 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:12:39.658401  383050 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:12:39.658471  383050 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:12:39.658678  383050 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:12:39.658870  383050 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 20:12:39.658965  383050 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000948661s
	I1017 20:12:39.659100  383050 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:12:39.659227  383050 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1017 20:12:39.659365  383050 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:12:39.659472  383050 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 20:12:39.659584  383050 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.551295546s
	I1017 20:12:39.659703  383050 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.978016549s
	I1017 20:12:39.659838  383050 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002032412s
	I1017 20:12:39.659994  383050 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:12:39.660141  383050 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:12:39.660232  383050 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:12:39.660495  383050 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-563805 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:12:39.660585  383050 kubeadm.go:318] [bootstrap-token] Using token: atcwb8.1ipdap3j28ki8vtx
	I1017 20:12:39.663779  383050 out.go:252]   - Configuring RBAC rules ...
	I1017 20:12:39.663902  383050 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:12:39.664008  383050 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:12:39.664240  383050 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:12:39.664416  383050 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:12:39.664558  383050 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:12:39.664672  383050 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:12:39.664861  383050 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:12:39.664924  383050 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:12:39.665007  383050 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:12:39.665023  383050 kubeadm.go:318] 
	I1017 20:12:39.665112  383050 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:12:39.665120  383050 kubeadm.go:318] 
	I1017 20:12:39.665234  383050 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:12:39.665242  383050 kubeadm.go:318] 
	I1017 20:12:39.665275  383050 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:12:39.665363  383050 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:12:39.665431  383050 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:12:39.665439  383050 kubeadm.go:318] 
	I1017 20:12:39.665533  383050 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:12:39.665546  383050 kubeadm.go:318] 
	I1017 20:12:39.665620  383050 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:12:39.665640  383050 kubeadm.go:318] 
	I1017 20:12:39.665715  383050 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:12:39.665853  383050 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:12:39.665942  383050 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:12:39.665951  383050 kubeadm.go:318] 
	I1017 20:12:39.666058  383050 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:12:39.666158  383050 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:12:39.666177  383050 kubeadm.go:318] 
	I1017 20:12:39.666283  383050 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token atcwb8.1ipdap3j28ki8vtx \
	I1017 20:12:39.666415  383050 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 \
	I1017 20:12:39.666455  383050 kubeadm.go:318] 	--control-plane 
	I1017 20:12:39.666464  383050 kubeadm.go:318] 
	I1017 20:12:39.666564  383050 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:12:39.666572  383050 kubeadm.go:318] 
	I1017 20:12:39.666683  383050 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token atcwb8.1ipdap3j28ki8vtx \
	I1017 20:12:39.666873  383050 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 
	I1017 20:12:39.666904  383050 cni.go:84] Creating CNI manager for ""
	I1017 20:12:39.666916  383050 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:39.669789  383050 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 20:12:39.671197  383050 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:12:39.677195  383050 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:12:39.677222  383050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:12:39.695422  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:12:39.980970  383050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:12:39.981087  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:39.981095  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-563805 minikube.k8s.io/updated_at=2025_10_17T20_12_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=default-k8s-diff-port-563805 minikube.k8s.io/primary=true
	I1017 20:12:39.994364  383050 ops.go:34] apiserver oom_adj: -16
	I1017 20:12:40.100076  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
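At this point the 383050 run has applied the kindnet CNI manifest, created the minikube-rbac clusterrolebinding, and labeled the new control-plane node. A quick manual verification of those post-init steps (hypothetical spot-check, assuming an admin kubeconfig for this cluster):

	# confirm the minikube metadata labels landed on the node
	kubectl get node default-k8s-diff-port-563805 --show-labels
	# confirm the RBAC binding the harness created
	kubectl get clusterrolebinding minikube-rbac -o wide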
	I1017 20:12:38.476813  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:12:38.477309  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:12:38.477368  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:12:38.477430  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:12:38.508174  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:38.508204  344862 cri.go:89] found id: ""
	I1017 20:12:38.508214  344862 logs.go:282] 1 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5]
	I1017 20:12:38.508277  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:38.512911  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:12:38.513002  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:12:38.549161  344862 cri.go:89] found id: ""
	I1017 20:12:38.549193  344862 logs.go:282] 0 containers: []
	W1017 20:12:38.549204  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:12:38.549212  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:12:38.549283  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:12:38.585673  344862 cri.go:89] found id: ""
	I1017 20:12:38.585706  344862 logs.go:282] 0 containers: []
	W1017 20:12:38.585719  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:12:38.585728  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:12:38.585838  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:12:38.621093  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:38.621122  344862 cri.go:89] found id: ""
	I1017 20:12:38.621133  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:12:38.621199  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:38.627591  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:12:38.627781  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:12:38.673217  344862 cri.go:89] found id: ""
	I1017 20:12:38.673265  344862 logs.go:282] 0 containers: []
	W1017 20:12:38.673278  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:12:38.673286  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:12:38.673345  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:12:38.714908  344862 cri.go:89] found id: "a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:38.714935  344862 cri.go:89] found id: ""
	I1017 20:12:38.714979  344862 logs.go:282] 1 containers: [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54]
	I1017 20:12:38.715079  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:38.719272  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:12:38.719380  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:12:38.752784  344862 cri.go:89] found id: ""
	I1017 20:12:38.752818  344862 logs.go:282] 0 containers: []
	W1017 20:12:38.752830  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:12:38.752838  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:12:38.752896  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:12:38.791560  344862 cri.go:89] found id: ""
	I1017 20:12:38.791592  344862 logs.go:282] 0 containers: []
	W1017 20:12:38.791604  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:12:38.791615  344862 logs.go:123] Gathering logs for kube-apiserver [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5] ...
	I1017 20:12:38.791636  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:38.832252  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:38.832291  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:38.908334  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:12:38.908373  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:38.955736  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:38.956057  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:12:39.022858  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:39.022902  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:39.063816  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:12:39.063854  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:12:39.185336  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:39.185382  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:39.211843  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:12:39.211888  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:12:39.287199  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:12:41.788834  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:12:41.789347  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:12:41.789409  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:12:41.789475  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:12:41.821119  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:41.821147  344862 cri.go:89] found id: ""
	I1017 20:12:41.821158  344862 logs.go:282] 1 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5]
	I1017 20:12:41.821213  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:41.826366  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:12:41.826437  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:12:41.858584  344862 cri.go:89] found id: ""
	I1017 20:12:41.858617  344862 logs.go:282] 0 containers: []
	W1017 20:12:41.858630  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:12:41.858639  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:12:41.858697  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:12:41.891844  344862 cri.go:89] found id: ""
	I1017 20:12:41.891870  344862 logs.go:282] 0 containers: []
	W1017 20:12:41.891877  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:12:41.891893  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:12:41.891943  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:12:41.926231  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:41.926278  344862 cri.go:89] found id: ""
	I1017 20:12:41.926289  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:12:41.926360  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:41.931004  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:12:41.931067  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:12:41.959760  344862 cri.go:89] found id: ""
	I1017 20:12:41.959792  344862 logs.go:282] 0 containers: []
	W1017 20:12:41.959804  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:12:41.959813  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:12:41.959880  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:12:41.991948  344862 cri.go:89] found id: "a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:41.991967  344862 cri.go:89] found id: ""
	I1017 20:12:41.991975  344862 logs.go:282] 1 containers: [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54]
	I1017 20:12:41.992038  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:41.996497  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:12:41.996576  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:12:42.025652  344862 cri.go:89] found id: ""
	I1017 20:12:42.025676  344862 logs.go:282] 0 containers: []
	W1017 20:12:42.025682  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:12:42.025688  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:12:42.025753  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:12:42.053374  344862 cri.go:89] found id: ""
	I1017 20:12:42.053398  344862 logs.go:282] 0 containers: []
	W1017 20:12:42.053408  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:12:42.053417  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:12:42.053430  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:42.081468  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:42.081502  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:12:42.138471  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:42.138520  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:42.172691  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:12:42.172730  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:12:42.273904  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:42.273957  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:42.298647  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:12:42.298688  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:12:42.371323  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:12:42.371356  344862 logs.go:123] Gathering logs for kube-apiserver [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5] ...
	I1017 20:12:42.371372  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:42.409548  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:42.409585  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
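
The 344862 lines above show the log-gathering pattern repeated throughout this run: for each component, list matching container IDs with "crictl ps -a --quiet --name=<component>", then tail the last 400 lines of each hit. A minimal local sketch of that loop in Go (minikube itself runs these commands over SSH via ssh_runner.go; this version assumes crictl is on the local PATH and runnable through sudo):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors "sudo crictl ps -a --quiet --name=<name>":
// it returns the IDs of all containers whose name matches the filter.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := containerIDs(comp)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", comp)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each hit, as the log above does.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", comp, id, logs)
		}
	}
}
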
	I1017 20:12:43.677338  385034 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:12:43.677389  385034 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:12:43.677481  385034 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:12:43.677530  385034 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 20:12:43.677562  385034 kubeadm.go:318] OS: Linux
	I1017 20:12:43.677602  385034 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:12:43.677690  385034 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:12:43.677791  385034 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:12:43.677861  385034 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:12:43.677947  385034 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:12:43.678025  385034 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:12:43.678104  385034 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:12:43.678173  385034 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 20:12:43.678275  385034 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:12:43.678406  385034 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:12:43.678527  385034 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:12:43.678613  385034 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 20:12:43.680901  385034 out.go:252]   - Generating certificates and keys ...
	I1017 20:12:43.681045  385034 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:12:43.681171  385034 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:12:43.681258  385034 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:12:43.681346  385034 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:12:43.681462  385034 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:12:43.681542  385034 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:12:43.681620  385034 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:12:43.681839  385034 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-051083] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1017 20:12:43.681928  385034 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:12:43.682148  385034 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-051083] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1017 20:12:43.682241  385034 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:12:43.682338  385034 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:12:43.682414  385034 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:12:43.682498  385034 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:12:43.682591  385034 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:12:43.682660  385034 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:12:43.682785  385034 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:12:43.682913  385034 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:12:43.682995  385034 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:12:43.683127  385034 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:12:43.683209  385034 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 20:12:43.685163  385034 out.go:252]   - Booting up control plane ...
	I1017 20:12:43.685310  385034 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:12:43.685429  385034 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:12:43.685524  385034 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:12:43.685653  385034 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:12:43.685823  385034 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:12:43.685985  385034 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:12:43.686111  385034 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:12:43.686176  385034 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:12:43.686379  385034 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:12:43.686541  385034 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 20:12:43.686621  385034 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.404805ms
	I1017 20:12:43.686795  385034 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:12:43.686910  385034 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1017 20:12:43.687026  385034 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:12:43.687151  385034 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 20:12:43.687271  385034 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.825964783s
	I1017 20:12:43.687375  385034 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.230106296s
	I1017 20:12:43.687478  385034 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001732083s
	I1017 20:12:43.687602  385034 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:12:43.687814  385034 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:12:43.687937  385034 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:12:43.688122  385034 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-051083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:12:43.688175  385034 kubeadm.go:318] [bootstrap-token] Using token: btetdk.jlvvs0vi98tn7d4l
	I1017 20:12:43.689818  385034 out.go:252]   - Configuring RBAC rules ...
	I1017 20:12:43.689978  385034 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:12:43.690099  385034 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:12:43.690320  385034 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:12:43.690487  385034 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:12:43.690582  385034 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:12:43.690653  385034 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:12:43.690797  385034 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:12:43.690865  385034 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:12:43.690931  385034 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:12:43.690939  385034 kubeadm.go:318] 
	I1017 20:12:43.690986  385034 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:12:43.690995  385034 kubeadm.go:318] 
	I1017 20:12:43.691066  385034 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:12:43.691072  385034 kubeadm.go:318] 
	I1017 20:12:43.691093  385034 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:12:43.691146  385034 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:12:43.691188  385034 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:12:43.691194  385034 kubeadm.go:318] 
	I1017 20:12:43.691248  385034 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:12:43.691264  385034 kubeadm.go:318] 
	I1017 20:12:43.691314  385034 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:12:43.691320  385034 kubeadm.go:318] 
	I1017 20:12:43.691361  385034 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:12:43.691445  385034 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:12:43.691564  385034 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:12:43.691584  385034 kubeadm.go:318] 
	I1017 20:12:43.691698  385034 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:12:43.691842  385034 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:12:43.691853  385034 kubeadm.go:318] 
	I1017 20:12:43.691977  385034 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token btetdk.jlvvs0vi98tn7d4l \
	I1017 20:12:43.692171  385034 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 \
	I1017 20:12:43.692214  385034 kubeadm.go:318] 	--control-plane 
	I1017 20:12:43.692223  385034 kubeadm.go:318] 
	I1017 20:12:43.692363  385034 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:12:43.692377  385034 kubeadm.go:318] 
	I1017 20:12:43.692519  385034 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token btetdk.jlvvs0vi98tn7d4l \
	I1017 20:12:43.692682  385034 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 
	I1017 20:12:43.692701  385034 cni.go:84] Creating CNI manager for ""
	I1017 20:12:43.692710  385034 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:12:43.696901  385034 out.go:179] * Configuring CNI (Container Networking Interface) ...
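
cni.go:143 above records the selection rule: with the docker driver and the crio runtime, minikube recommends kindnet rather than relying on the driver's built-in networking. A hedged sketch of that decision (the function name and the fallback default are illustrative, not minikube's actual cni package):

package main

import "fmt"

// chooseCNI sketches the recommendation logged by cni.go:143: the
// docker driver paired with a non-docker runtime such as crio gets
// kindnet, since the driver's built-in bridge only serves docker.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "crio" {
		return "kindnet"
	}
	return "bridge" // illustrative default, not minikube's full table
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // kindnet
}
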
	I1017 20:12:40.600795  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:41.101180  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:41.601145  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:42.100217  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:42.600945  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:43.100968  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:43.600955  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:44.100795  383050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:44.193599  383050 kubeadm.go:1113] duration metric: took 4.212583495s to wait for elevateKubeSystemPrivileges
	I1017 20:12:44.193645  383050 kubeadm.go:402] duration metric: took 15.475205284s to StartCluster
	I1017 20:12:44.193673  383050 settings.go:142] acquiring lock: {Name:mka4633fb25e97d0a4c6d64012444d90b7517c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:44.193784  383050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:12:44.195682  383050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:44.195959  383050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:12:44.195974  383050 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:12:44.196041  383050 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:12:44.196128  383050 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-563805"
	I1017 20:12:44.196157  383050 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-563805"
	I1017 20:12:44.196165  383050 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-563805"
	I1017 20:12:44.196172  383050 config.go:182] Loaded profile config "default-k8s-diff-port-563805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:44.196189  383050 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-563805"
	I1017 20:12:44.196193  383050 host.go:66] Checking if "default-k8s-diff-port-563805" exists ...
	I1017 20:12:44.196585  383050 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-563805 --format={{.State.Status}}
	I1017 20:12:44.196986  383050 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-563805 --format={{.State.Status}}
	I1017 20:12:44.198507  383050 out.go:179] * Verifying Kubernetes components...
	I1017 20:12:44.200346  383050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:44.222888  383050 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-563805"
	I1017 20:12:44.222953  383050 host.go:66] Checking if "default-k8s-diff-port-563805" exists ...
	I1017 20:12:44.223468  383050 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-563805 --format={{.State.Status}}
	I1017 20:12:44.224922  383050 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:12:44.226878  383050 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:12:44.226902  383050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:12:44.226965  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:44.255066  383050 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:12:44.255095  383050 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:12:44.255156  383050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:12:44.265992  383050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:12:44.291327  383050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:12:44.324308  383050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:12:44.390808  383050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:12:44.422037  383050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:12:44.428024  383050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:12:44.540181  383050 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1017 20:12:44.541859  383050 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-563805" to be "Ready" ...
	I1017 20:12:44.802663  383050 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 20:12:44.803961  383050 addons.go:514] duration metric: took 607.911045ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1017 20:12:45.048868  383050 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-563805" context rescaled to 1 replicas
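
The sed pipeline run at 20:12:44.324308 rewrites the coredns ConfigMap in place: it inserts a hosts block in front of the forward plugin so host.minikube.internal resolves to the host gateway, and inserts log in front of errors. Assuming an otherwise stock Corefile, the patched fragment would read roughly:

        log
        errors
        ...
        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

(the elided lines stand for the untouched stock plugins; only the hosts block and the log line come from the injection).
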
	I1017 20:12:43.700965  385034 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:12:43.707548  385034 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:12:43.707574  385034 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:12:43.727499  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:12:44.045091  385034 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:12:44.045287  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:44.045379  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-051083 minikube.k8s.io/updated_at=2025_10_17T20_12_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=newest-cni-051083 minikube.k8s.io/primary=true
	I1017 20:12:44.160165  385034 ops.go:34] apiserver oom_adj: -16
	I1017 20:12:44.160331  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:44.661087  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:45.160473  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:45.661145  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:46.160715  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:46.660426  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
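
Both the 383050 and 385034 runs above spin on "kubectl get sa default" at roughly half-second intervals until the default service account exists, which is what the elevateKubeSystemPrivileges duration metrics measure. A minimal sketch of that wait loop (the kubectl binary path and the timeout are simplifications of what the log shows):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until "kubectl get sa default" succeeds,
// like the half-second retries in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
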
	I1017 20:12:44.967838  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:12:44.968311  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:12:44.968381  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:12:44.968445  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:12:44.999264  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:44.999305  344862 cri.go:89] found id: ""
	I1017 20:12:44.999315  344862 logs.go:282] 1 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5]
	I1017 20:12:44.999378  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:45.004229  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:12:45.004303  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:12:45.040122  344862 cri.go:89] found id: ""
	I1017 20:12:45.040158  344862 logs.go:282] 0 containers: []
	W1017 20:12:45.040170  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:12:45.040180  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:12:45.040253  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:12:45.076912  344862 cri.go:89] found id: ""
	I1017 20:12:45.076941  344862 logs.go:282] 0 containers: []
	W1017 20:12:45.076953  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:12:45.076961  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:12:45.077018  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:12:45.110634  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:45.110657  344862 cri.go:89] found id: ""
	I1017 20:12:45.110666  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:12:45.110725  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:45.115522  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:12:45.115595  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:12:45.149163  344862 cri.go:89] found id: ""
	I1017 20:12:45.149191  344862 logs.go:282] 0 containers: []
	W1017 20:12:45.149203  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:12:45.149211  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:12:45.149281  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:12:45.191479  344862 cri.go:89] found id: "a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:45.191522  344862 cri.go:89] found id: ""
	I1017 20:12:45.191533  344862 logs.go:282] 1 containers: [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54]
	I1017 20:12:45.191646  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:12:45.197199  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:12:45.197319  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:12:45.232189  344862 cri.go:89] found id: ""
	I1017 20:12:45.232273  344862 logs.go:282] 0 containers: []
	W1017 20:12:45.232289  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:12:45.232298  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:12:45.232407  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:12:45.272035  344862 cri.go:89] found id: ""
	I1017 20:12:45.272063  344862 logs.go:282] 0 containers: []
	W1017 20:12:45.272075  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:12:45.272087  344862 logs.go:123] Gathering logs for kube-apiserver [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5] ...
	I1017 20:12:45.272105  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:12:45.309729  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:12:45.309784  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:12:45.375096  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:12:45.375132  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:12:45.405098  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:12:45.405134  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:12:45.456907  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:12:45.456945  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:12:45.491428  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:12:45.491460  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:12:45.603601  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:12:45.603650  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:12:45.626888  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:12:45.626933  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:12:45.704576  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
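
This second failed describe nodes has the same cause as the one at 20:12:42: the apiserver at 192.168.76.2:8443 is refusing connections, so every kubectl call through it dies. The healthz probe logged by api_server.go:253 amounts to an HTTPS GET against /healthz; a sketch follows (InsecureSkipVerify is an illustrative shortcut for the self-signed apiserver certificate, not minikube's actual client setup, which trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz probes https://<addr>/healthz the way the
// "Checking apiserver healthz" lines above do.
func checkHealthz(addr string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// A real client would pin the cluster CA here instead
			// of skipping certificate verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return err // e.g. "connect: connection refused", as logged above
	}
	defer resp.Body.Close()
	fmt.Printf("https://%s/healthz returned %d\n", addr, resp.StatusCode)
	return nil
}

func main() {
	if err := checkHealthz("192.168.76.2:8443"); err != nil {
		fmt.Println("stopped:", err)
	}
}
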
	I1017 20:12:47.160989  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:47.660519  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:48.160723  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:48.660476  385034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:12:48.737119  385034 kubeadm.go:1113] duration metric: took 4.691879821s to wait for elevateKubeSystemPrivileges
	I1017 20:12:48.737155  385034 kubeadm.go:402] duration metric: took 15.665596242s to StartCluster
	I1017 20:12:48.737180  385034 settings.go:142] acquiring lock: {Name:mka4633fb25e97d0a4c6d64012444d90b7517c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:48.737296  385034 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:12:48.740189  385034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:12:48.740457  385034 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:12:48.740573  385034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:12:48.740606  385034 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:12:48.740699  385034 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-051083"
	I1017 20:12:48.740727  385034 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-051083"
	I1017 20:12:48.741323  385034 host.go:66] Checking if "newest-cni-051083" exists ...
	I1017 20:12:48.741826  385034 config.go:182] Loaded profile config "newest-cni-051083": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:12:48.743274  385034 cli_runner.go:164] Run: docker container inspect newest-cni-051083 --format={{.State.Status}}
	I1017 20:12:48.743382  385034 addons.go:69] Setting default-storageclass=true in profile "newest-cni-051083"
	I1017 20:12:48.743432  385034 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-051083"
	I1017 20:12:48.743654  385034 out.go:179] * Verifying Kubernetes components...
	I1017 20:12:48.743935  385034 cli_runner.go:164] Run: docker container inspect newest-cni-051083 --format={{.State.Status}}
	I1017 20:12:48.747028  385034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:12:48.775384  385034 addons.go:238] Setting addon default-storageclass=true in "newest-cni-051083"
	I1017 20:12:48.775481  385034 host.go:66] Checking if "newest-cni-051083" exists ...
	I1017 20:12:48.775718  385034 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:12:48.775962  385034 cli_runner.go:164] Run: docker container inspect newest-cni-051083 --format={{.State.Status}}
	I1017 20:12:48.777437  385034 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:12:48.777460  385034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:12:48.777520  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:48.800438  385034 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:12:48.800463  385034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:12:48.800536  385034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:12:48.807461  385034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:12:48.835597  385034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:12:48.841545  385034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:12:48.916885  385034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:12:48.930637  385034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:12:48.953757  385034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:12:49.053199  385034 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1017 20:12:49.054999  385034 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:12:49.055067  385034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:12:49.260564  385034 api_server.go:72] duration metric: took 520.06957ms to wait for apiserver process to appear ...
	I1017 20:12:49.260595  385034 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:12:49.260616  385034 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:12:49.266255  385034 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1017 20:12:49.267388  385034 api_server.go:141] control plane version: v1.34.1
	I1017 20:12:49.267414  385034 api_server.go:131] duration metric: took 6.811636ms to wait for apiserver health ...
	I1017 20:12:49.267424  385034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:12:49.268294  385034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 20:12:49.269592  385034 addons.go:514] duration metric: took 528.987498ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1017 20:12:49.270371  385034 system_pods.go:59] 8 kube-system pods found
	I1017 20:12:49.270404  385034 system_pods.go:61] "coredns-66bc5c9577-26q6r" [9f41e0e1-0ec5-4641-89b1-0c3489fd8ded] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 20:12:49.270414  385034 system_pods.go:61] "etcd-newest-cni-051083" [a0343ecd-b1ea-4a09-a05b-fba7a474213c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:12:49.270426  385034 system_pods.go:61] "kindnet-2k897" [30c67a93-f25e-435f-baf0-f939ba9859df] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 20:12:49.270436  385034 system_pods.go:61] "kube-apiserver-newest-cni-051083" [657a2192-282d-409f-8893-014d034cd42d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:12:49.270444  385034 system_pods.go:61] "kube-controller-manager-newest-cni-051083" [6f894a23-fc07-48df-b282-4e4335e3ca12] Running
	I1017 20:12:49.270461  385034 system_pods.go:61] "kube-proxy-bv8fn" [e5deab5b-135e-40d2-8a6b-ec83d4c4fce5] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:12:49.270472  385034 system_pods.go:61] "kube-scheduler-newest-cni-051083" [5ab00384-0333-49c7-a1ac-012b9d035066] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:12:49.270483  385034 system_pods.go:61] "storage-provisioner" [2699b8f0-5373-4f6e-8e29-f68953e6a741] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 20:12:49.270493  385034 system_pods.go:74] duration metric: took 3.061075ms to wait for pod list to return data ...
	I1017 20:12:49.270506  385034 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:12:49.275487  385034 default_sa.go:45] found service account: "default"
	I1017 20:12:49.275516  385034 default_sa.go:55] duration metric: took 5.001929ms for default service account to be created ...
	I1017 20:12:49.275531  385034 kubeadm.go:586] duration metric: took 535.045029ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 20:12:49.275557  385034 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:12:49.278245  385034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:12:49.278276  385034 node_conditions.go:123] node cpu capacity is 8
	I1017 20:12:49.278290  385034 node_conditions.go:105] duration metric: took 2.728511ms to run NodePressure ...
	I1017 20:12:49.278304  385034 start.go:241] waiting for startup goroutines ...
	I1017 20:12:49.558212  385034 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-051083" context rescaled to 1 replicas
	I1017 20:12:49.558252  385034 start.go:246] waiting for cluster config update ...
	I1017 20:12:49.558262  385034 start.go:255] writing updated cluster config ...
	I1017 20:12:49.558588  385034 ssh_runner.go:195] Run: rm -f paused
	I1017 20:12:49.609776  385034 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:12:49.612463  385034 out.go:179] * Done! kubectl is now configured to use "newest-cni-051083" cluster and "default" namespace by default
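
start.go:624 reports the client/cluster minor-version skew before declaring success; kubectl supports one minor version of skew against the server, so 0 is trivially fine here. A toy version of that computation (parsing is simplified and ignores pre-release suffixes):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor
// components of two "major.minor.patch" version strings.
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.34.1", "1.34.1")
	fmt.Printf("kubectl: 1.34.1, cluster: 1.34.1 (minor skew: %d)\n", skew)
}
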
	
	
	==> CRI-O <==
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.733620954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.736687724Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c3195f64-7bd5-4b06-adb4-565f3cc8adf0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.737489425Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f47e6eb4-44dd-4550-81fb-4609265d8ff0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.738499884Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.739148648Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.739426857Z" level=info msg="Ran pod sandbox bf0247f26f3a1449393084e0da809ae0515db25091998af517f8e7c4e5985c88 with infra container: kube-system/kube-proxy-bv8fn/POD" id=c3195f64-7bd5-4b06-adb4-565f3cc8adf0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.739976309Z" level=info msg="Ran pod sandbox 3446256289e971019333b198eabd2b5e5a260ca5ae7103b378781718bd251a90 with infra container: kube-system/kindnet-2k897/POD" id=f47e6eb4-44dd-4550-81fb-4609265d8ff0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.740914751Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=05da86d6-0d3a-4ad9-8382-55b89a9a0424 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.740920405Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0a82031c-ca80-4a67-af35-6271032d8006 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.742664245Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ff306629-1909-4820-984d-7ec1f18cd53b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.744531614Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=843f3ff8-1bbd-4127-9ee6-b29ab92b58fa name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.748246131Z" level=info msg="Creating container: kube-system/kindnet-2k897/kindnet-cni" id=fe8d454c-c392-4e9a-9da4-d0d46f32fd48 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.748710933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.749423456Z" level=info msg="Creating container: kube-system/kube-proxy-bv8fn/kube-proxy" id=16dbcb59-5c7a-4ba4-bffa-817da283cdbf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.749912104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.755065057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.756729175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.762453649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.763225062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.937955152Z" level=info msg="Created container 7916792986c6c0ded2cce827368b423e9fa74409f0aaa302fadc9cf9e3880c4d: kube-system/kindnet-2k897/kindnet-cni" id=fe8d454c-c392-4e9a-9da4-d0d46f32fd48 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.938850162Z" level=info msg="Starting container: 7916792986c6c0ded2cce827368b423e9fa74409f0aaa302fadc9cf9e3880c4d" id=a4e1ccdd-7273-463e-82d5-d43e7d29d5ea name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.941980104Z" level=info msg="Created container 78de950b6b2a2d0b716e00cb34f8fb0a347af384001f06a8dde773e41c89f9a3: kube-system/kube-proxy-bv8fn/kube-proxy" id=16dbcb59-5c7a-4ba4-bffa-817da283cdbf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.942216425Z" level=info msg="Started container" PID=1534 containerID=7916792986c6c0ded2cce827368b423e9fa74409f0aaa302fadc9cf9e3880c4d description=kube-system/kindnet-2k897/kindnet-cni id=a4e1ccdd-7273-463e-82d5-d43e7d29d5ea name=/runtime.v1.RuntimeService/StartContainer sandboxID=3446256289e971019333b198eabd2b5e5a260ca5ae7103b378781718bd251a90
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.943499684Z" level=info msg="Starting container: 78de950b6b2a2d0b716e00cb34f8fb0a347af384001f06a8dde773e41c89f9a3" id=d60af941-4420-4d54-a9e4-9d37a4beb2ba name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:12:48 newest-cni-051083 crio[779]: time="2025-10-17T20:12:48.948166874Z" level=info msg="Started container" PID=1535 containerID=78de950b6b2a2d0b716e00cb34f8fb0a347af384001f06a8dde773e41c89f9a3 description=kube-system/kube-proxy-bv8fn/kube-proxy id=d60af941-4420-4d54-a9e4-9d37a4beb2ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=bf0247f26f3a1449393084e0da809ae0515db25091998af517f8e7c4e5985c88
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	78de950b6b2a2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   bf0247f26f3a1       kube-proxy-bv8fn                            kube-system
	7916792986c6c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   3446256289e97       kindnet-2k897                               kube-system
	a60583bc6aeef       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   35fac9a8b12c6       kube-apiserver-newest-cni-051083            kube-system
	98dfe0a831dc5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   3847cb92b21e8       kube-scheduler-newest-cni-051083            kube-system
	0d3aec859456e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   ff3302a672ea5       etcd-newest-cni-051083                      kube-system
	5e4b8d654bb25       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   789c5b3215262       kube-controller-manager-newest-cni-051083   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-051083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-051083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=newest-cni-051083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_12_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:12:40 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-051083
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:12:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:12:43 +0000   Fri, 17 Oct 2025 20:12:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:12:43 +0000   Fri, 17 Oct 2025 20:12:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:12:43 +0000   Fri, 17 Oct 2025 20:12:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 17 Oct 2025 20:12:43 +0000   Fri, 17 Oct 2025 20:12:38 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-051083
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                f6bb8511-2049-4150-aef2-f04e212d38cd
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-051083                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-2k897                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-051083             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-051083    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-bv8fn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-051083             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-051083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-051083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-051083 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-051083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-051083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet          Node newest-cni-051083 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-051083 event: Registered Node newest-cni-051083 in Controller
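
The Ready=False condition and the node.kubernetes.io/not-ready taint above both trace to the kubelet finding no CNI configuration file in /etc/cni/net.d, which is expected this early: kindnet started only seconds before and writes its config once running. A quick check for that condition (the directory comes from the kubelet message above; the extensions are common CNI conventions, assumed here):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether any CNI config file exists where the
// kubelet looks, matching the "no CNI configuration file" message above.
func hasCNIConfig(dir string) (bool, []string) {
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(dir, pat))
		found = append(found, matches...)
	}
	return len(found) > 0, found
}

func main() {
	ok, files := hasCNIConfig("/etc/cni/net.d")
	if !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file in /etc/cni/net.d")
		os.Exit(1)
	}
	fmt.Println("CNI configs:", files)
}
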
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [0d3aec859456ef9ef99d65ba81e3529c90f752f696475959f755b0e8ecb2a379] <==
	{"level":"warn","ts":"2025-10-17T20:12:39.454337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.460925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.478283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.485080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.491726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.501771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.509720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.526052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.534417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.541782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.549980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.558370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.571031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.576115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.583557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.590652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.613230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.621091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.628613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.644229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.652094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.663978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.668553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.683852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:39.737339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57704","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:12:51 up  1:55,  0 user,  load average: 8.83, 5.14, 2.98
	Linux newest-cni-051083 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7916792986c6c0ded2cce827368b423e9fa74409f0aaa302fadc9cf9e3880c4d] <==
	I1017 20:12:49.217553       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:12:49.217917       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 20:12:49.218079       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:12:49.218100       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:12:49.218122       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:12:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:12:49.419387       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:12:49.419917       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:12:49.419947       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:12:49.420185       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:12:49.720391       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:12:49.720423       1 metrics.go:72] Registering metrics
	I1017 20:12:49.720480       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [a60583bc6aeef6395baada460d4f573c0ddaab06f1e23e7e990ddf0822a1dbc8] <==
	E1017 20:12:40.406583       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1017 20:12:40.423575       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1017 20:12:40.454088       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:12:40.462159       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:12:40.462318       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 20:12:40.469355       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:12:40.470989       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:12:40.627835       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:12:41.257506       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 20:12:41.261657       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 20:12:41.261680       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:12:41.839701       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:12:41.883850       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:12:41.962754       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 20:12:41.971793       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1017 20:12:41.973226       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:12:41.978325       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:12:42.306093       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:12:43.080332       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:12:43.103967       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 20:12:43.124328       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 20:12:48.058113       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:12:48.110485       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:12:48.115755       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:12:48.407842       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5e4b8d654bb25570b8b3d0cb9910cec7f0c9447cb9f380f16abab85c55fc77d2] <==
	I1017 20:12:47.304244       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:12:47.306222       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:12:47.306275       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:12:47.306389       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:12:47.306396       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:12:47.306561       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:12:47.306606       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:12:47.306981       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 20:12:47.307001       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 20:12:47.306985       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:12:47.307132       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:12:47.307134       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:12:47.307377       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:12:47.307839       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:12:47.309135       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:12:47.309144       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 20:12:47.309152       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 20:12:47.310778       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:12:47.314559       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:12:47.323099       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:12:47.329286       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 20:12:47.329461       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:12:47.329562       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-051083"
	I1017 20:12:47.329621       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 20:12:47.344371       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [78de950b6b2a2d0b716e00cb34f8fb0a347af384001f06a8dde773e41c89f9a3] <==
	I1017 20:12:48.999970       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:12:49.060051       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:12:49.160820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:12:49.160884       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 20:12:49.161000       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:12:49.183181       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:12:49.183238       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:12:49.189029       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:12:49.189539       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:12:49.189565       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:12:49.193126       1 config.go:200] "Starting service config controller"
	I1017 20:12:49.193151       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:12:49.193158       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:12:49.193160       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:12:49.193155       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:12:49.193209       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:12:49.193312       1 config.go:309] "Starting node config controller"
	I1017 20:12:49.193322       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:12:49.193329       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:12:49.293269       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:12:49.293336       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 20:12:49.293347       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [98dfe0a831dc500d6cd7bdf8b697ad38a815622b59b1a83e1e19f841073ee900] <==
	E1017 20:12:40.490097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:12:40.490186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:12:40.490890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:12:40.490970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:12:40.491026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:12:40.491177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:12:40.491257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:12:40.491602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:12:40.491618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:12:40.491368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:12:40.491653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:12:40.491598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:12:40.491379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:12:40.491367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:12:40.491543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:12:41.296357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:12:41.344087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:12:41.344087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:12:41.413180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:12:41.440506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:12:41.490783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:12:41.524153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:12:41.551319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:12:41.571396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1017 20:12:41.989759       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:12:43 newest-cni-051083 kubelet[1331]: I1017 20:12:43.098551    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9803991c23cce552986730013f169f7-usr-local-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-051083\" (UID: \"f9803991c23cce552986730013f169f7\") " pod="kube-system/kube-controller-manager-newest-cni-051083"
	Oct 17 20:12:43 newest-cni-051083 kubelet[1331]: I1017 20:12:43.099917    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9803991c23cce552986730013f169f7-usr-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-051083\" (UID: \"f9803991c23cce552986730013f169f7\") " pod="kube-system/kube-controller-manager-newest-cni-051083"
	Oct 17 20:12:43 newest-cni-051083 kubelet[1331]: I1017 20:12:43.099974    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/63797a6c6f101be6eafdf7ed8b9cd215-etcd-data\") pod \"etcd-newest-cni-051083\" (UID: \"63797a6c6f101be6eafdf7ed8b9cd215\") " pod="kube-system/etcd-newest-cni-051083"
	Oct 17 20:12:43 newest-cni-051083 kubelet[1331]: I1017 20:12:43.100008    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f9803991c23cce552986730013f169f7-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-051083\" (UID: \"f9803991c23cce552986730013f169f7\") " pod="kube-system/kube-controller-manager-newest-cni-051083"
	Oct 17 20:12:43 newest-cni-051083 kubelet[1331]: I1017 20:12:43.100049    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9803991c23cce552986730013f169f7-kubeconfig\") pod \"kube-controller-manager-newest-cni-051083\" (UID: \"f9803991c23cce552986730013f169f7\") " pod="kube-system/kube-controller-manager-newest-cni-051083"
	Oct 17 20:12:43 newest-cni-051083 kubelet[1331]: I1017 20:12:43.889090    1331 apiserver.go:52] "Watching apiserver"
	Oct 17 20:12:43 newest-cni-051083 kubelet[1331]: I1017 20:12:43.896497    1331 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 17 20:12:43 newest-cni-051083 kubelet[1331]: I1017 20:12:43.929718    1331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-051083"
	Oct 17 20:12:43 newest-cni-051083 kubelet[1331]: E1017 20:12:43.952393    1331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-051083\" already exists" pod="kube-system/etcd-newest-cni-051083"
	Oct 17 20:12:43 newest-cni-051083 kubelet[1331]: I1017 20:12:43.980472    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-051083" podStartSLOduration=0.980450247 podStartE2EDuration="980.450247ms" podCreationTimestamp="2025-10-17 20:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:43.980106695 +0000 UTC m=+1.156185139" watchObservedRunningTime="2025-10-17 20:12:43.980450247 +0000 UTC m=+1.156528693"
	Oct 17 20:12:43 newest-cni-051083 kubelet[1331]: I1017 20:12:43.981823    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-051083" podStartSLOduration=2.981801169 podStartE2EDuration="2.981801169s" podCreationTimestamp="2025-10-17 20:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:43.954153968 +0000 UTC m=+1.130232412" watchObservedRunningTime="2025-10-17 20:12:43.981801169 +0000 UTC m=+1.157879613"
	Oct 17 20:12:44 newest-cni-051083 kubelet[1331]: I1017 20:12:44.012984    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-051083" podStartSLOduration=1.012964429 podStartE2EDuration="1.012964429s" podCreationTimestamp="2025-10-17 20:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:44.00026571 +0000 UTC m=+1.176344180" watchObservedRunningTime="2025-10-17 20:12:44.012964429 +0000 UTC m=+1.189042873"
	Oct 17 20:12:44 newest-cni-051083 kubelet[1331]: I1017 20:12:44.027932    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-051083" podStartSLOduration=1.027908141 podStartE2EDuration="1.027908141s" podCreationTimestamp="2025-10-17 20:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:44.013116998 +0000 UTC m=+1.189195444" watchObservedRunningTime="2025-10-17 20:12:44.027908141 +0000 UTC m=+1.203986585"
	Oct 17 20:12:47 newest-cni-051083 kubelet[1331]: I1017 20:12:47.351367    1331 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 17 20:12:47 newest-cni-051083 kubelet[1331]: I1017 20:12:47.352092    1331 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 17 20:12:48 newest-cni-051083 kubelet[1331]: I1017 20:12:48.537473    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e5deab5b-135e-40d2-8a6b-ec83d4c4fce5-kube-proxy\") pod \"kube-proxy-bv8fn\" (UID: \"e5deab5b-135e-40d2-8a6b-ec83d4c4fce5\") " pod="kube-system/kube-proxy-bv8fn"
	Oct 17 20:12:48 newest-cni-051083 kubelet[1331]: I1017 20:12:48.537676    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30c67a93-f25e-435f-baf0-f939ba9859df-lib-modules\") pod \"kindnet-2k897\" (UID: \"30c67a93-f25e-435f-baf0-f939ba9859df\") " pod="kube-system/kindnet-2k897"
	Oct 17 20:12:48 newest-cni-051083 kubelet[1331]: I1017 20:12:48.537722    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30c67a93-f25e-435f-baf0-f939ba9859df-xtables-lock\") pod \"kindnet-2k897\" (UID: \"30c67a93-f25e-435f-baf0-f939ba9859df\") " pod="kube-system/kindnet-2k897"
	Oct 17 20:12:48 newest-cni-051083 kubelet[1331]: I1017 20:12:48.537769    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g54p\" (UniqueName: \"kubernetes.io/projected/30c67a93-f25e-435f-baf0-f939ba9859df-kube-api-access-6g54p\") pod \"kindnet-2k897\" (UID: \"30c67a93-f25e-435f-baf0-f939ba9859df\") " pod="kube-system/kindnet-2k897"
	Oct 17 20:12:48 newest-cni-051083 kubelet[1331]: I1017 20:12:48.537808    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjrxj\" (UniqueName: \"kubernetes.io/projected/e5deab5b-135e-40d2-8a6b-ec83d4c4fce5-kube-api-access-kjrxj\") pod \"kube-proxy-bv8fn\" (UID: \"e5deab5b-135e-40d2-8a6b-ec83d4c4fce5\") " pod="kube-system/kube-proxy-bv8fn"
	Oct 17 20:12:48 newest-cni-051083 kubelet[1331]: I1017 20:12:48.537834    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/30c67a93-f25e-435f-baf0-f939ba9859df-cni-cfg\") pod \"kindnet-2k897\" (UID: \"30c67a93-f25e-435f-baf0-f939ba9859df\") " pod="kube-system/kindnet-2k897"
	Oct 17 20:12:48 newest-cni-051083 kubelet[1331]: I1017 20:12:48.537860    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5deab5b-135e-40d2-8a6b-ec83d4c4fce5-xtables-lock\") pod \"kube-proxy-bv8fn\" (UID: \"e5deab5b-135e-40d2-8a6b-ec83d4c4fce5\") " pod="kube-system/kube-proxy-bv8fn"
	Oct 17 20:12:48 newest-cni-051083 kubelet[1331]: I1017 20:12:48.537883    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5deab5b-135e-40d2-8a6b-ec83d4c4fce5-lib-modules\") pod \"kube-proxy-bv8fn\" (UID: \"e5deab5b-135e-40d2-8a6b-ec83d4c4fce5\") " pod="kube-system/kube-proxy-bv8fn"
	Oct 17 20:12:49 newest-cni-051083 kubelet[1331]: I1017 20:12:49.966297    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2k897" podStartSLOduration=1.966266337 podStartE2EDuration="1.966266337s" podCreationTimestamp="2025-10-17 20:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:49.966119217 +0000 UTC m=+7.142197698" watchObservedRunningTime="2025-10-17 20:12:49.966266337 +0000 UTC m=+7.142344781"
	Oct 17 20:12:49 newest-cni-051083 kubelet[1331]: I1017 20:12:49.997455    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bv8fn" podStartSLOduration=1.9974263319999999 podStartE2EDuration="1.997426332s" podCreationTimestamp="2025-10-17 20:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:49.997089295 +0000 UTC m=+7.173167738" watchObservedRunningTime="2025-10-17 20:12:49.997426332 +0000 UTC m=+7.173504776"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-051083 -n newest-cni-051083
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-051083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-26q6r storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-051083 describe pod coredns-66bc5c9577-26q6r storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-051083 describe pod coredns-66bc5c9577-26q6r storage-provisioner: exit status 1 (66.584289ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-26q6r" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-051083 describe pod coredns-66bc5c9577-26q6r storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.18s)
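Note: the post-mortem above can be replayed by hand against the same profile. A minimal sketch, assuming the newest-cni-051083 context still exists; both commands are taken from the harness output at helpers_test.go:269 and helpers_test.go:285, and the comments are editorial:

	# Same query the harness uses to find pods that are not Running:
	kubectl --context newest-cni-051083 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# Describing those pods can race with pod churn: both pods named by
	# the query were already gone by the time describe ran (CoreDNS pods
	# in particular are recreated under new hashed names), hence the
	# NotFound errors and exit status 1 above.
	kubectl --context newest-cni-051083 describe pod \
	  coredns-66bc5c9577-26q6r storage-provisioner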

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-051083 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-051083 --alsologtostderr -v=1: exit status 80 (1.686833066s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-051083 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:13:05.625041  396930 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:13:05.625294  396930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:13:05.625302  396930 out.go:374] Setting ErrFile to fd 2...
	I1017 20:13:05.625307  396930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:13:05.625497  396930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:13:05.625733  396930 out.go:368] Setting JSON to false
	I1017 20:13:05.625782  396930 mustload.go:65] Loading cluster: newest-cni-051083
	I1017 20:13:05.626158  396930 config.go:182] Loaded profile config "newest-cni-051083": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:05.627268  396930 cli_runner.go:164] Run: docker container inspect newest-cni-051083 --format={{.State.Status}}
	I1017 20:13:05.645609  396930 host.go:66] Checking if "newest-cni-051083" exists ...
	I1017 20:13:05.645987  396930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:13:05.707716  396930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-17 20:13:05.696129397 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:13:05.708457  396930 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-051083 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:13:05.710441  396930 out.go:179] * Pausing node newest-cni-051083 ... 
	I1017 20:13:05.711595  396930 host.go:66] Checking if "newest-cni-051083" exists ...
	I1017 20:13:05.711899  396930 ssh_runner.go:195] Run: systemctl --version
	I1017 20:13:05.711940  396930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-051083
	I1017 20:13:05.731891  396930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/newest-cni-051083/id_rsa Username:docker}
	I1017 20:13:05.827691  396930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:13:05.840334  396930 pause.go:52] kubelet running: true
	I1017 20:13:05.840400  396930 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:13:05.976580  396930 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:13:05.976680  396930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:13:06.048168  396930 cri.go:89] found id: "ed99414bac6b9ea79f328b7ccf57871536ced1a403fe8460b0da75d47e736716"
	I1017 20:13:06.048196  396930 cri.go:89] found id: "73273a320d039563a683bba50e23eb61ca48cf1f1c34584dd5e722a6cfb37dfd"
	I1017 20:13:06.048201  396930 cri.go:89] found id: "cc42dfd84be4f5af1ec837f817b2596783c4cf948909c641a75e07dfb52e9d71"
	I1017 20:13:06.048205  396930 cri.go:89] found id: "2931c4d6f33f556407ac0a8d56dd07ee89f89feffa910248e7bebee0bbe9f80d"
	I1017 20:13:06.048209  396930 cri.go:89] found id: "932b7d1eb64f55b7e3fb460e5b9d3ffa1644b7ab3e1b81d603893cd983f9ba2b"
	I1017 20:13:06.048215  396930 cri.go:89] found id: "b96c4e8ab4485ef16cba36dad44b2b04cf5d5e7a68f7e5de57f6c0d891d205c6"
	I1017 20:13:06.048219  396930 cri.go:89] found id: ""
	I1017 20:13:06.048271  396930 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:13:06.061408  396930 retry.go:31] will retry after 260.811603ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:13:06Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:13:06.322815  396930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:13:06.336214  396930 pause.go:52] kubelet running: false
	I1017 20:13:06.336313  396930 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:13:06.450088  396930 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:13:06.450184  396930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:13:06.519615  396930 cri.go:89] found id: "ed99414bac6b9ea79f328b7ccf57871536ced1a403fe8460b0da75d47e736716"
	I1017 20:13:06.519640  396930 cri.go:89] found id: "73273a320d039563a683bba50e23eb61ca48cf1f1c34584dd5e722a6cfb37dfd"
	I1017 20:13:06.519651  396930 cri.go:89] found id: "cc42dfd84be4f5af1ec837f817b2596783c4cf948909c641a75e07dfb52e9d71"
	I1017 20:13:06.519656  396930 cri.go:89] found id: "2931c4d6f33f556407ac0a8d56dd07ee89f89feffa910248e7bebee0bbe9f80d"
	I1017 20:13:06.519661  396930 cri.go:89] found id: "932b7d1eb64f55b7e3fb460e5b9d3ffa1644b7ab3e1b81d603893cd983f9ba2b"
	I1017 20:13:06.519665  396930 cri.go:89] found id: "b96c4e8ab4485ef16cba36dad44b2b04cf5d5e7a68f7e5de57f6c0d891d205c6"
	I1017 20:13:06.519670  396930 cri.go:89] found id: ""
	I1017 20:13:06.519719  396930 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:13:06.532399  396930 retry.go:31] will retry after 510.139605ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:13:06Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:13:07.042915  396930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:13:07.056396  396930 pause.go:52] kubelet running: false
	I1017 20:13:07.056479  396930 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:13:07.169771  396930 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:13:07.169859  396930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:13:07.239279  396930 cri.go:89] found id: "ed99414bac6b9ea79f328b7ccf57871536ced1a403fe8460b0da75d47e736716"
	I1017 20:13:07.239312  396930 cri.go:89] found id: "73273a320d039563a683bba50e23eb61ca48cf1f1c34584dd5e722a6cfb37dfd"
	I1017 20:13:07.239318  396930 cri.go:89] found id: "cc42dfd84be4f5af1ec837f817b2596783c4cf948909c641a75e07dfb52e9d71"
	I1017 20:13:07.239323  396930 cri.go:89] found id: "2931c4d6f33f556407ac0a8d56dd07ee89f89feffa910248e7bebee0bbe9f80d"
	I1017 20:13:07.239326  396930 cri.go:89] found id: "932b7d1eb64f55b7e3fb460e5b9d3ffa1644b7ab3e1b81d603893cd983f9ba2b"
	I1017 20:13:07.239330  396930 cri.go:89] found id: "b96c4e8ab4485ef16cba36dad44b2b04cf5d5e7a68f7e5de57f6c0d891d205c6"
	I1017 20:13:07.239332  396930 cri.go:89] found id: ""
	I1017 20:13:07.239372  396930 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:13:07.253651  396930 out.go:203] 
	W1017 20:13:07.255318  396930 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:13:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:13:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:13:07.255339  396930 out.go:285] * 
	* 
	W1017 20:13:07.259603  396930 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:13:07.260921  396930 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-051083 --alsologtostderr -v=1 failed: exit status 80
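Every retry above dies at the same step: `sudo runc list -f json` exits 1 because /run/runc is missing inside the node container. A minimal manual check, assuming SSH access to the profile's node; only the runc invocation is taken from this log, the ls and crictl lines are illustrative diagnostics, and the state-root location is an assumption about this CRI-O image:

	# Enter the node that pause is operating on:
	minikube -p newest-cni-051083 ssh

	# The state root minikube's pause path lists; absent here, which
	# produces "open /run/runc: no such file or directory":
	ls -ld /run/runc

	# The exact command that fails in the log:
	sudo runc list -f json

	# crictl queries CRI-O directly and still sees the containers that
	# cri.go:89 enumerated above:
	sudo crictl ps -a

Since crictl can still enumerate the containers, the failure looks like a mismatch between where minikube expects runc state and where this runtime actually keeps it, rather than the containers having disappeared.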
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-051083
helpers_test.go:243: (dbg) docker inspect newest-cni-051083:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3",
	        "Created": "2025-10-17T20:12:27.257340799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393700,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:12:54.788721396Z",
	            "FinishedAt": "2025-10-17T20:12:53.932358908Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3/hosts",
	        "LogPath": "/var/lib/docker/containers/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3-json.log",
	        "Name": "/newest-cni-051083",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-051083:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-051083",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3",
	                "LowerDir": "/var/lib/docker/overlay2/062a91ed6c5db49f3f5dcb31d62da98e5eff9b8268ab536ed44bdffd07c1cce6-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/062a91ed6c5db49f3f5dcb31d62da98e5eff9b8268ab536ed44bdffd07c1cce6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/062a91ed6c5db49f3f5dcb31d62da98e5eff9b8268ab536ed44bdffd07c1cce6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/062a91ed6c5db49f3f5dcb31d62da98e5eff9b8268ab536ed44bdffd07c1cce6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-051083",
	                "Source": "/var/lib/docker/volumes/newest-cni-051083/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-051083",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-051083",
	                "name.minikube.sigs.k8s.io": "newest-cni-051083",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5103502b768d80e9bf599f9c08fbd8fdbacea2e5bbfe20d75de2b1eb91bfd990",
	            "SandboxKey": "/var/run/docker/netns/5103502b768d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33208"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33206"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33207"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-051083": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:b9:54:64:2a:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "42b465f0ccbda0e5ca1971c81bb13558e21d93dd3bfe9fc99a5609898791da62",
	                    "EndpointID": "af86526f9c306eee9738159ba4b593dbcc7c4673c6c9ec67ef29cf34f9df6645",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-051083",
	                        "46e8db0f52af"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
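
For reference: in the inspect dump above, the empty HostPort ("") entries under HostConfig.PortBindings ask Docker to assign ephemeral host ports, and the actual assignments appear under NetworkSettings.Ports (33204-33208 here). A single field can be pulled out of such a dump with docker inspect's standard Go-template formatting; for example, the host port mapped to the apiserver's 8443/tcp:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-051083
	# prints 33207 for the state captured above
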
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051083 -n newest-cni-051083
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051083 -n newest-cni-051083: exit status 2 (333.941805ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-051083 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-051083 logs -n 25: (1.090584711s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ image   │ old-k8s-version-726816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ pause   │ -p old-k8s-version-726816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p cert-expiration-202048 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-202048       │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ image   │ no-preload-449580 image list --format=json                                                                                                                                                                                                    │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ pause   │ -p no-preload-449580 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ delete  │ -p cert-expiration-202048                                                                                                                                                                                                                     │ cert-expiration-202048       │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p disable-driver-mounts-270495                                                                                                                                                                                                               │ disable-driver-mounts-270495 │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p no-preload-449580                                                                                                                                                                                                                          │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p no-preload-449580                                                                                                                                                                                                                          │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ addons  │ enable metrics-server -p embed-certs-051488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ stop    │ -p embed-certs-051488 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:13 UTC │
	│ addons  │ enable metrics-server -p newest-cni-051083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ stop    │ -p newest-cni-051083 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-051083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:13 UTC │
	│ addons  │ enable dashboard -p embed-certs-051488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ image   │ newest-cni-051083 image list --format=json                                                                                                                                                                                                    │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ pause   │ -p newest-cni-051083 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:13:04
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:13:04.057932  395845 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:13:04.058195  395845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:13:04.058205  395845 out.go:374] Setting ErrFile to fd 2...
	I1017 20:13:04.058210  395845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:13:04.058436  395845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:13:04.058990  395845 out.go:368] Setting JSON to false
	I1017 20:13:04.060386  395845 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6932,"bootTime":1760725052,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:13:04.060483  395845 start.go:141] virtualization: kvm guest
	I1017 20:13:04.062422  395845 out.go:179] * [embed-certs-051488] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:13:04.063786  395845 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:13:04.063804  395845 notify.go:220] Checking for updates...
	I1017 20:13:04.066679  395845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:13:04.067970  395845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:13:04.072949  395845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:13:04.074279  395845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:13:04.075611  395845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:13:04.077287  395845 config.go:182] Loaded profile config "embed-certs-051488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:04.077805  395845 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:13:04.101800  395845 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:13:04.101908  395845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:13:04.164788  395845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:13:04.155127043 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:13:04.164904  395845 docker.go:318] overlay module found
	I1017 20:13:04.166781  395845 out.go:179] * Using the docker driver based on existing profile
	I1017 20:13:04.168070  395845 start.go:305] selected driver: docker
	I1017 20:13:04.168091  395845 start.go:925] validating driver "docker" against &{Name:embed-certs-051488 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-051488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:13:04.168202  395845 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:13:04.168883  395845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:13:04.232575  395845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:13:04.219903315 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:13:04.233013  395845 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:13:04.233046  395845 cni.go:84] Creating CNI manager for ""
	I1017 20:13:04.233115  395845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:13:04.233169  395845 start.go:349] cluster config:
	{Name:embed-certs-051488 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-051488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:13:04.234877  395845 out.go:179] * Starting "embed-certs-051488" primary control-plane node in "embed-certs-051488" cluster
	I1017 20:13:04.236012  395845 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:13:04.237238  395845 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:13:04.238458  395845 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:13:04.238500  395845 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:13:04.238516  395845 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 20:13:04.238529  395845 cache.go:58] Caching tarball of preloaded images
	I1017 20:13:04.238650  395845 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:13:04.238663  395845 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:13:04.238823  395845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/config.json ...
	I1017 20:13:04.263599  395845 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:13:04.263633  395845 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:13:04.263655  395845 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:13:04.263684  395845 start.go:360] acquireMachinesLock for embed-certs-051488: {Name:mk6afa1aece12c87fd06ad5337662430a71ab0ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:13:04.263762  395845 start.go:364] duration metric: took 58.169µs to acquireMachinesLock for "embed-certs-051488"
	I1017 20:13:04.263787  395845 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:13:04.263796  395845 fix.go:54] fixHost starting: 
	I1017 20:13:04.264133  395845 cli_runner.go:164] Run: docker container inspect embed-certs-051488 --format={{.State.Status}}
	I1017 20:13:04.288330  395845 fix.go:112] recreateIfNeeded on embed-certs-051488: state=Stopped err=<nil>
	W1017 20:13:04.288382  395845 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:13:04.050559  393424 addons.go:514] duration metric: took 2.318814614s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 20:13:04.428370  393424 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:13:04.436198  393424 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:13:04.436227  393424 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:13:04.928919  393424 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:13:04.934361  393424 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1017 20:13:04.935619  393424 api_server.go:141] control plane version: v1.34.1
	I1017 20:13:04.935639  393424 api_server.go:131] duration metric: took 3.007435204s to wait for apiserver health ...
	I1017 20:13:04.935650  393424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:13:04.940810  393424 system_pods.go:59] 8 kube-system pods found
	I1017 20:13:04.940855  393424 system_pods.go:61] "coredns-66bc5c9577-26q6r" [9f41e0e1-0ec5-4641-89b1-0c3489fd8ded] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 20:13:04.940864  393424 system_pods.go:61] "etcd-newest-cni-051083" [a0343ecd-b1ea-4a09-a05b-fba7a474213c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:13:04.940872  393424 system_pods.go:61] "kindnet-2k897" [30c67a93-f25e-435f-baf0-f939ba9859df] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 20:13:04.940878  393424 system_pods.go:61] "kube-apiserver-newest-cni-051083" [657a2192-282d-409f-8893-014d034cd42d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:13:04.940884  393424 system_pods.go:61] "kube-controller-manager-newest-cni-051083" [6f894a23-fc07-48df-b282-4e4335e3ca12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:13:04.940893  393424 system_pods.go:61] "kube-proxy-bv8fn" [e5deab5b-135e-40d2-8a6b-ec83d4c4fce5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:13:04.940899  393424 system_pods.go:61] "kube-scheduler-newest-cni-051083" [5ab00384-0333-49c7-a1ac-012b9d035066] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:13:04.940904  393424 system_pods.go:61] "storage-provisioner" [2699b8f0-5373-4f6e-8e29-f68953e6a741] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 20:13:04.940911  393424 system_pods.go:74] duration metric: took 5.25518ms to wait for pod list to return data ...
	I1017 20:13:04.940919  393424 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:13:04.943909  393424 default_sa.go:45] found service account: "default"
	I1017 20:13:04.943932  393424 default_sa.go:55] duration metric: took 3.007687ms for default service account to be created ...
	I1017 20:13:04.943944  393424 kubeadm.go:586] duration metric: took 3.212237584s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 20:13:04.943961  393424 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:13:04.946902  393424 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:13:04.946933  393424 node_conditions.go:123] node cpu capacity is 8
	I1017 20:13:04.946949  393424 node_conditions.go:105] duration metric: took 2.983443ms to run NodePressure ...
	I1017 20:13:04.946975  393424 start.go:241] waiting for startup goroutines ...
	I1017 20:13:04.946990  393424 start.go:246] waiting for cluster config update ...
	I1017 20:13:04.947004  393424 start.go:255] writing updated cluster config ...
	I1017 20:13:04.947315  393424 ssh_runner.go:195] Run: rm -f paused
	I1017 20:13:05.011023  393424 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:13:05.014969  393424 out.go:179] * Done! kubectl is now configured to use "newest-cni-051083" cluster and "default" namespace by default
	I1017 20:13:04.211879  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:13:04.212399  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:13:04.212470  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:13:04.212531  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:13:04.242645  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:13:04.242667  344862 cri.go:89] found id: ""
	I1017 20:13:04.242676  344862 logs.go:282] 1 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5]
	I1017 20:13:04.242774  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:13:04.247795  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:13:04.247873  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:13:04.285562  344862 cri.go:89] found id: ""
	I1017 20:13:04.285595  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.285606  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:13:04.285618  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:13:04.285676  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:13:04.322350  344862 cri.go:89] found id: ""
	I1017 20:13:04.322380  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.322392  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:13:04.322399  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:13:04.322462  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:13:04.358185  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:13:04.358206  344862 cri.go:89] found id: ""
	I1017 20:13:04.358214  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:13:04.358261  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:13:04.362681  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:13:04.362775  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:13:04.399527  344862 cri.go:89] found id: ""
	I1017 20:13:04.399558  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.399569  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:13:04.399577  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:13:04.399645  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:13:04.443818  344862 cri.go:89] found id: "a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:13:04.443845  344862 cri.go:89] found id: ""
	I1017 20:13:04.443856  344862 logs.go:282] 1 containers: [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54]
	I1017 20:13:04.443919  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:13:04.448756  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:13:04.448820  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:13:04.483018  344862 cri.go:89] found id: ""
	I1017 20:13:04.483052  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.483064  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:13:04.483072  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:13:04.483131  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:13:04.516610  344862 cri.go:89] found id: ""
	I1017 20:13:04.516643  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.516654  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:13:04.516666  344862 logs.go:123] Gathering logs for kube-apiserver [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5] ...
	I1017 20:13:04.516683  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:13:04.562489  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:13:04.562538  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:13:04.641377  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:13:04.641410  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:13:04.682504  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:13:04.682546  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:13:04.745368  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:13:04.745424  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:13:04.788178  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:13:04.788222  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:13:04.894186  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:13:04.894222  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:13:04.918292  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:13:04.918396  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:13:04.995522  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:13:07.496810  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:13:07.497321  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:13:07.497387  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:13:07.497487  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:13:07.528629  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:13:07.528659  344862 cri.go:89] found id: ""
	I1017 20:13:07.528670  344862 logs.go:282] 1 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5]
	I1017 20:13:07.528759  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:13:07.534258  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:13:07.534355  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:13:07.570129  344862 cri.go:89] found id: ""
	I1017 20:13:07.570152  344862 logs.go:282] 0 containers: []
	W1017 20:13:07.570160  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:13:07.570165  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:13:07.570210  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:13:07.599973  344862 cri.go:89] found id: ""
	I1017 20:13:07.600000  344862 logs.go:282] 0 containers: []
	W1017 20:13:07.600011  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:13:07.600019  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:13:07.600069  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
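
For reference: the log-gathering pattern in this segment (crictl ps -a --quiet --name=<component> to resolve container IDs, then crictl logs --tail 400 on each hit) can be replayed by hand; a sketch, assuming shell access to the node (e.g. via minikube ssh):

	sudo crictl ps -a --quiet --name=kube-apiserver \
	  | xargs -r -n1 sudo crictl logs --tail 400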
	
	
	==> CRI-O <==
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.258667944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.262636016Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=916f1059-2d31-43d2-9116-1f10f109906a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.264526113Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1ea0aba1-33cd-4a0a-8b70-fa522ea03a74 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.265897616Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.266679343Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.266892785Z" level=info msg="Ran pod sandbox f3cf8740fdcdff033b2d13af8f653e59f4102699089dacb04dfeb0f4ef6cc9e9 with infra container: kube-system/kindnet-2k897/POD" id=916f1059-2d31-43d2-9116-1f10f109906a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.267613718Z" level=info msg="Ran pod sandbox 9fa326afa63b094f584232687e425763391f1adfd889bfbff2f3792ed5b56ab4 with infra container: kube-system/kube-proxy-bv8fn/POD" id=1ea0aba1-33cd-4a0a-8b70-fa522ea03a74 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.268845139Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=24c777f0-5da2-4b5f-a456-0ea274a6bdc2 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.26949417Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b878f69f-b3ee-40c8-981a-338059eaff53 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.269906964Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=05bf6d6a-23ec-476a-a599-6ee72c4ddba1 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.270607588Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=57ad1a5e-4b22-40d4-9f3f-245a342c7e35 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.271085118Z" level=info msg="Creating container: kube-system/kindnet-2k897/kindnet-cni" id=5d151bf2-5661-4795-8011-c542e916bdd4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.27143884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.275226119Z" level=info msg="Creating container: kube-system/kube-proxy-bv8fn/kube-proxy" id=68c03621-7196-44b8-8236-5c84016c7db4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.275687978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.275798179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.277071468Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.281603995Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.282406381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.3052276Z" level=info msg="Created container 73273a320d039563a683bba50e23eb61ca48cf1f1c34584dd5e722a6cfb37dfd: kube-system/kindnet-2k897/kindnet-cni" id=5d151bf2-5661-4795-8011-c542e916bdd4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.306076424Z" level=info msg="Starting container: 73273a320d039563a683bba50e23eb61ca48cf1f1c34584dd5e722a6cfb37dfd" id=699950b2-9b2b-48b7-859b-bd7f49d04ae5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.30846758Z" level=info msg="Started container" PID=1041 containerID=73273a320d039563a683bba50e23eb61ca48cf1f1c34584dd5e722a6cfb37dfd description=kube-system/kindnet-2k897/kindnet-cni id=699950b2-9b2b-48b7-859b-bd7f49d04ae5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3cf8740fdcdff033b2d13af8f653e59f4102699089dacb04dfeb0f4ef6cc9e9
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.310595242Z" level=info msg="Created container ed99414bac6b9ea79f328b7ccf57871536ced1a403fe8460b0da75d47e736716: kube-system/kube-proxy-bv8fn/kube-proxy" id=68c03621-7196-44b8-8236-5c84016c7db4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.311631006Z" level=info msg="Starting container: ed99414bac6b9ea79f328b7ccf57871536ced1a403fe8460b0da75d47e736716" id=66a2b69e-0c31-4d54-8286-59daa84874f7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.315206561Z" level=info msg="Started container" PID=1042 containerID=ed99414bac6b9ea79f328b7ccf57871536ced1a403fe8460b0da75d47e736716 description=kube-system/kube-proxy-bv8fn/kube-proxy id=66a2b69e-0c31-4d54-8286-59daa84874f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9fa326afa63b094f584232687e425763391f1adfd889bfbff2f3792ed5b56ab4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ed99414bac6b9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   9fa326afa63b0       kube-proxy-bv8fn                            kube-system
	73273a320d039       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   f3cf8740fdcdf       kindnet-2k897                               kube-system
	cc42dfd84be4f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 seconds ago       Running             etcd                      1                   41bfc77de3dee       etcd-newest-cni-051083                      kube-system
	2931c4d6f33f5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 seconds ago       Running             kube-scheduler            1                   10d0b7e46743f       kube-scheduler-newest-cni-051083            kube-system
	932b7d1eb64f5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 seconds ago       Running             kube-apiserver            1                   7e2eb680324d9       kube-apiserver-newest-cni-051083            kube-system
	b96c4e8ab4485       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 seconds ago       Running             kube-controller-manager   1                   5ab79b9634362       kube-controller-manager-newest-cni-051083   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-051083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-051083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=newest-cni-051083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_12_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:12:40 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-051083
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:13:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:13:03 +0000   Fri, 17 Oct 2025 20:12:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:13:03 +0000   Fri, 17 Oct 2025 20:12:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:13:03 +0000   Fri, 17 Oct 2025 20:12:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 17 Oct 2025 20:13:03 +0000   Fri, 17 Oct 2025 20:12:38 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-051083
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                f6bb8511-2049-4150-aef2-f04e212d38cd
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-051083                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25s
	  kube-system                 kindnet-2k897                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-apiserver-newest-cni-051083             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-newest-cni-051083    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-bv8fn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-scheduler-newest-cni-051083             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s (x8 over 30s)  kubelet          Node newest-cni-051083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x8 over 30s)  kubelet          Node newest-cni-051083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x8 over 30s)  kubelet          Node newest-cni-051083 status is now: NodeHasSufficientPID
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    25s                kubelet          Node newest-cni-051083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s                kubelet          Node newest-cni-051083 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  25s                kubelet          Node newest-cni-051083 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           21s                node-controller  Node newest-cni-051083 event: Registered Node newest-cni-051083 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 8s)    kubelet          Node newest-cni-051083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 8s)    kubelet          Node newest-cni-051083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x8 over 8s)    kubelet          Node newest-cni-051083 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-051083 event: Registered Node newest-cni-051083 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [cc42dfd84be4f5af1ec837f817b2596783c4cf948909c641a75e07dfb52e9d71] <==
	{"level":"warn","ts":"2025-10-17T20:13:02.785479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.794083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.802692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.809811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.816512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.823312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.830224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.837332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.844036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.850644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.857911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.864248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.871562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.878329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.891388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.898106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.905323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.911639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.918503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.925775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.941174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.944903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.951211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.957947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:03.005805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33616","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:13:08 up  1:55,  0 user,  load average: 7.09, 4.93, 2.95
	Linux newest-cni-051083 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [73273a320d039563a683bba50e23eb61ca48cf1f1c34584dd5e722a6cfb37dfd] <==
	I1017 20:13:04.580685       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:13:04.580953       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 20:13:04.581088       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:13:04.581106       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:13:04.581134       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:13:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:13:04.785753       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:13:04.785785       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:13:04.785809       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:13:04.785950       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:13:05.086895       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:13:05.086929       1 metrics.go:72] Registering metrics
	I1017 20:13:05.086986       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [932b7d1eb64f55b7e3fb460e5b9d3ffa1644b7ab3e1b81d603893cd983f9ba2b] <==
	I1017 20:13:03.472387       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:13:03.472396       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:13:03.469464       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:13:03.470794       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:13:03.472895       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:13:03.470819       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:13:03.470877       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:13:03.478271       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1017 20:13:03.480731       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:13:03.484238       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 20:13:03.487358       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:13:03.492077       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 20:13:03.492107       1 policy_source.go:240] refreshing policies
	I1017 20:13:03.513240       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:13:03.805106       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:13:03.839982       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:13:03.870458       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:13:03.879469       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:13:03.888939       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:13:03.927599       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.58.128"}
	I1017 20:13:03.943219       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.126.117"}
	I1017 20:13:04.373845       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:13:06.808458       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:13:07.208466       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:13:07.307039       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b96c4e8ab4485ef16cba36dad44b2b04cf5d5e7a68f7e5de57f6c0d891d205c6] <==
	I1017 20:13:06.775299       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:13:06.779556       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 20:13:06.779681       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:13:06.779791       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-051083"
	I1017 20:13:06.779846       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 20:13:06.803939       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 20:13:06.803954       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:13:06.803973       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 20:13:06.804163       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:13:06.804687       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:13:06.806258       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:13:06.809761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:13:06.809925       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:13:06.809986       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:13:06.810019       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:13:06.810025       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:13:06.810031       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:13:06.810133       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:13:06.810156       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:13:06.811848       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 20:13:06.817822       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:13:06.819860       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:13:06.822124       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:13:06.822134       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:13:06.826946       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [ed99414bac6b9ea79f328b7ccf57871536ced1a403fe8460b0da75d47e736716] <==
	I1017 20:13:04.359930       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:13:04.423662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:13:04.524572       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:13:04.524611       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 20:13:04.524793       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:13:04.549752       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:13:04.549916       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:13:04.557664       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:13:04.558102       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:13:04.558132       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:13:04.560361       1 config.go:200] "Starting service config controller"
	I1017 20:13:04.560414       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:13:04.560454       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:13:04.560477       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:13:04.560532       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:13:04.560555       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:13:04.561156       1 config.go:309] "Starting node config controller"
	I1017 20:13:04.561568       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:13:04.561608       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:13:04.660657       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:13:04.660687       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 20:13:04.660661       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2931c4d6f33f556407ac0a8d56dd07ee89f89feffa910248e7bebee0bbe9f80d] <==
	I1017 20:13:02.452629       1 serving.go:386] Generated self-signed cert in-memory
	W1017 20:13:03.397200       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 20:13:03.397280       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:13:03.397295       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 20:13:03.397321       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 20:13:03.454947       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:13:03.454985       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:13:03.458328       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:03.458385       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:03.459446       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:13:03.459539       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 20:13:03.461952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1017 20:13:04.558924       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:13:02 newest-cni-051083 kubelet[668]: E1017 20:13:02.992299     668 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-051083\" not found" node="newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.451986     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.507878     668 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.508000     668 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.508037     668 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.508951     668 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.528930     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: E1017 20:13:03.536998     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-051083\" already exists" pod="kube-system/kube-controller-manager-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: E1017 20:13:03.572836     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-051083\" already exists" pod="kube-system/etcd-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.572878     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: E1017 20:13:03.581338     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-051083\" already exists" pod="kube-system/kube-apiserver-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.581389     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: E1017 20:13:03.587982     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-051083\" already exists" pod="kube-system/kube-controller-manager-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.588019     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: E1017 20:13:03.594942     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-051083\" already exists" pod="kube-system/kube-scheduler-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.949908     668 apiserver.go:52] "Watching apiserver"
	Oct 17 20:13:04 newest-cni-051083 kubelet[668]: I1017 20:13:04.052426     668 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 17 20:13:04 newest-cni-051083 kubelet[668]: I1017 20:13:04.107545     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5deab5b-135e-40d2-8a6b-ec83d4c4fce5-xtables-lock\") pod \"kube-proxy-bv8fn\" (UID: \"e5deab5b-135e-40d2-8a6b-ec83d4c4fce5\") " pod="kube-system/kube-proxy-bv8fn"
	Oct 17 20:13:04 newest-cni-051083 kubelet[668]: I1017 20:13:04.107600     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30c67a93-f25e-435f-baf0-f939ba9859df-xtables-lock\") pod \"kindnet-2k897\" (UID: \"30c67a93-f25e-435f-baf0-f939ba9859df\") " pod="kube-system/kindnet-2k897"
	Oct 17 20:13:04 newest-cni-051083 kubelet[668]: I1017 20:13:04.107623     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5deab5b-135e-40d2-8a6b-ec83d4c4fce5-lib-modules\") pod \"kube-proxy-bv8fn\" (UID: \"e5deab5b-135e-40d2-8a6b-ec83d4c4fce5\") " pod="kube-system/kube-proxy-bv8fn"
	Oct 17 20:13:04 newest-cni-051083 kubelet[668]: I1017 20:13:04.107731     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/30c67a93-f25e-435f-baf0-f939ba9859df-cni-cfg\") pod \"kindnet-2k897\" (UID: \"30c67a93-f25e-435f-baf0-f939ba9859df\") " pod="kube-system/kindnet-2k897"
	Oct 17 20:13:04 newest-cni-051083 kubelet[668]: I1017 20:13:04.107915     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30c67a93-f25e-435f-baf0-f939ba9859df-lib-modules\") pod \"kindnet-2k897\" (UID: \"30c67a93-f25e-435f-baf0-f939ba9859df\") " pod="kube-system/kindnet-2k897"
	Oct 17 20:13:05 newest-cni-051083 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:13:05 newest-cni-051083 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:13:05 newest-cni-051083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-051083 -n newest-cni-051083
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-051083 -n newest-cni-051083: exit status 2 (382.440063ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-051083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-26q6r storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hpfnv kubernetes-dashboard-855c9754f9-nvzzl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-051083 describe pod coredns-66bc5c9577-26q6r storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hpfnv kubernetes-dashboard-855c9754f9-nvzzl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-051083 describe pod coredns-66bc5c9577-26q6r storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hpfnv kubernetes-dashboard-855c9754f9-nvzzl: exit status 1 (75.630354ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-26q6r" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-hpfnv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-nvzzl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-051083 describe pod coredns-66bc5c9577-26q6r storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hpfnv kubernetes-dashboard-855c9754f9-nvzzl: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-051083
helpers_test.go:243: (dbg) docker inspect newest-cni-051083:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3",
	        "Created": "2025-10-17T20:12:27.257340799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393700,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:12:54.788721396Z",
	            "FinishedAt": "2025-10-17T20:12:53.932358908Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3/hosts",
	        "LogPath": "/var/lib/docker/containers/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3/46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3-json.log",
	        "Name": "/newest-cni-051083",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-051083:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-051083",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "46e8db0f52af37925a4a374f9a59939850016cb87ca01ae9a85153a1b1d2a3d3",
	                "LowerDir": "/var/lib/docker/overlay2/062a91ed6c5db49f3f5dcb31d62da98e5eff9b8268ab536ed44bdffd07c1cce6-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/062a91ed6c5db49f3f5dcb31d62da98e5eff9b8268ab536ed44bdffd07c1cce6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/062a91ed6c5db49f3f5dcb31d62da98e5eff9b8268ab536ed44bdffd07c1cce6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/062a91ed6c5db49f3f5dcb31d62da98e5eff9b8268ab536ed44bdffd07c1cce6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-051083",
	                "Source": "/var/lib/docker/volumes/newest-cni-051083/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-051083",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-051083",
	                "name.minikube.sigs.k8s.io": "newest-cni-051083",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5103502b768d80e9bf599f9c08fbd8fdbacea2e5bbfe20d75de2b1eb91bfd990",
	            "SandboxKey": "/var/run/docker/netns/5103502b768d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33208"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33206"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33207"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-051083": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:b9:54:64:2a:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "42b465f0ccbda0e5ca1971c81bb13558e21d93dd3bfe9fc99a5609898791da62",
	                    "EndpointID": "af86526f9c306eee9738159ba4b593dbcc7c4673c6c9ec67ef29cf34f9df6645",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-051083",
	                        "46e8db0f52af"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051083 -n newest-cni-051083
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051083 -n newest-cni-051083: exit status 2 (364.360422ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-051083 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-051083 logs -n 25: (1.127410509s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-726816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ pause   │ -p old-k8s-version-726816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p cert-expiration-202048 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-202048       │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ image   │ no-preload-449580 image list --format=json                                                                                                                                                                                                    │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ pause   │ -p no-preload-449580 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ delete  │ -p cert-expiration-202048                                                                                                                                                                                                                     │ cert-expiration-202048       │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p disable-driver-mounts-270495                                                                                                                                                                                                               │ disable-driver-mounts-270495 │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p no-preload-449580                                                                                                                                                                                                                          │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p no-preload-449580                                                                                                                                                                                                                          │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ addons  │ enable metrics-server -p embed-certs-051488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ stop    │ -p embed-certs-051488 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:13 UTC │
	│ addons  │ enable metrics-server -p newest-cni-051083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ stop    │ -p newest-cni-051083 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-051083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:13 UTC │
	│ addons  │ enable dashboard -p embed-certs-051488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ image   │ newest-cni-051083 image list --format=json                                                                                                                                                                                                    │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ pause   │ -p newest-cni-051083 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-563805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:13:04
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
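
	The four header lines above spell out klog's line format. As a minimal sketch (not part of the report; the field names are assumptions read off the format string), entries like the ones that follow can be split apart with Go's regexp package:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches the documented header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := `I1017 20:13:04.057932  395845 out.go:360] Setting OutFile to fd 1 ...`
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
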
	I1017 20:13:04.057932  395845 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:13:04.058195  395845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:13:04.058205  395845 out.go:374] Setting ErrFile to fd 2...
	I1017 20:13:04.058210  395845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:13:04.058436  395845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:13:04.058990  395845 out.go:368] Setting JSON to false
	I1017 20:13:04.060386  395845 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6932,"bootTime":1760725052,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:13:04.060483  395845 start.go:141] virtualization: kvm guest
	I1017 20:13:04.062422  395845 out.go:179] * [embed-certs-051488] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:13:04.063786  395845 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:13:04.063804  395845 notify.go:220] Checking for updates...
	I1017 20:13:04.066679  395845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:13:04.067970  395845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:13:04.072949  395845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:13:04.074279  395845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:13:04.075611  395845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:13:04.077287  395845 config.go:182] Loaded profile config "embed-certs-051488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:04.077805  395845 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:13:04.101800  395845 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:13:04.101908  395845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:13:04.164788  395845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:13:04.155127043 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:13:04.164904  395845 docker.go:318] overlay module found
	I1017 20:13:04.166781  395845 out.go:179] * Using the docker driver based on existing profile
	I1017 20:13:04.168070  395845 start.go:305] selected driver: docker
	I1017 20:13:04.168091  395845 start.go:925] validating driver "docker" against &{Name:embed-certs-051488 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-051488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:13:04.168202  395845 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:13:04.168883  395845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:13:04.232575  395845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:13:04.219903315 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:13:04.233013  395845 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:13:04.233046  395845 cni.go:84] Creating CNI manager for ""
	I1017 20:13:04.233115  395845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:13:04.233169  395845 start.go:349] cluster config:
	{Name:embed-certs-051488 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-051488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:13:04.234877  395845 out.go:179] * Starting "embed-certs-051488" primary control-plane node in "embed-certs-051488" cluster
	I1017 20:13:04.236012  395845 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:13:04.237238  395845 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:13:04.238458  395845 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:13:04.238500  395845 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:13:04.238516  395845 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 20:13:04.238529  395845 cache.go:58] Caching tarball of preloaded images
	I1017 20:13:04.238650  395845 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:13:04.238663  395845 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:13:04.238823  395845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/config.json ...
	I1017 20:13:04.263599  395845 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:13:04.263633  395845 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:13:04.263655  395845 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:13:04.263684  395845 start.go:360] acquireMachinesLock for embed-certs-051488: {Name:mk6afa1aece12c87fd06ad5337662430a71ab0ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:13:04.263762  395845 start.go:364] duration metric: took 58.169µs to acquireMachinesLock for "embed-certs-051488"
	I1017 20:13:04.263787  395845 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:13:04.263796  395845 fix.go:54] fixHost starting: 
	I1017 20:13:04.264133  395845 cli_runner.go:164] Run: docker container inspect embed-certs-051488 --format={{.State.Status}}
	I1017 20:13:04.288330  395845 fix.go:112] recreateIfNeeded on embed-certs-051488: state=Stopped err=<nil>
	W1017 20:13:04.288382  395845 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:13:04.050559  393424 addons.go:514] duration metric: took 2.318814614s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 20:13:04.428370  393424 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:13:04.436198  393424 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:13:04.436227  393424 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:13:04.928919  393424 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:13:04.934361  393424 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1017 20:13:04.935619  393424 api_server.go:141] control plane version: v1.34.1
	I1017 20:13:04.935639  393424 api_server.go:131] duration metric: took 3.007435204s to wait for apiserver health ...
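
	The wait loop above is visible in miniature: the first probe of /healthz returns 500 while the rbac/bootstrap-roles post-start hook is still pending, and roughly half a second later the same endpoint returns 200 with body "ok". A minimal sketch of that polling pattern, assuming a self-signed apiserver certificate (hence InsecureSkipVerify); this is an illustration using only the standard library, not minikube's api_server.go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthy GETs the healthz URL until it answers 200 "ok",
	// tolerating transient 500s and connection errors along the way.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitHealthy("https://192.168.103.2:8443/healthz", 3*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
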
	I1017 20:13:04.935650  393424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:13:04.940810  393424 system_pods.go:59] 8 kube-system pods found
	I1017 20:13:04.940855  393424 system_pods.go:61] "coredns-66bc5c9577-26q6r" [9f41e0e1-0ec5-4641-89b1-0c3489fd8ded] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 20:13:04.940864  393424 system_pods.go:61] "etcd-newest-cni-051083" [a0343ecd-b1ea-4a09-a05b-fba7a474213c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:13:04.940872  393424 system_pods.go:61] "kindnet-2k897" [30c67a93-f25e-435f-baf0-f939ba9859df] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 20:13:04.940878  393424 system_pods.go:61] "kube-apiserver-newest-cni-051083" [657a2192-282d-409f-8893-014d034cd42d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:13:04.940884  393424 system_pods.go:61] "kube-controller-manager-newest-cni-051083" [6f894a23-fc07-48df-b282-4e4335e3ca12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:13:04.940893  393424 system_pods.go:61] "kube-proxy-bv8fn" [e5deab5b-135e-40d2-8a6b-ec83d4c4fce5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:13:04.940899  393424 system_pods.go:61] "kube-scheduler-newest-cni-051083" [5ab00384-0333-49c7-a1ac-012b9d035066] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:13:04.940904  393424 system_pods.go:61] "storage-provisioner" [2699b8f0-5373-4f6e-8e29-f68953e6a741] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 20:13:04.940911  393424 system_pods.go:74] duration metric: took 5.25518ms to wait for pod list to return data ...
	I1017 20:13:04.940919  393424 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:13:04.943909  393424 default_sa.go:45] found service account: "default"
	I1017 20:13:04.943932  393424 default_sa.go:55] duration metric: took 3.007687ms for default service account to be created ...
	I1017 20:13:04.943944  393424 kubeadm.go:586] duration metric: took 3.212237584s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 20:13:04.943961  393424 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:13:04.946902  393424 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:13:04.946933  393424 node_conditions.go:123] node cpu capacity is 8
	I1017 20:13:04.946949  393424 node_conditions.go:105] duration metric: took 2.983443ms to run NodePressure ...
	I1017 20:13:04.946975  393424 start.go:241] waiting for startup goroutines ...
	I1017 20:13:04.946990  393424 start.go:246] waiting for cluster config update ...
	I1017 20:13:04.947004  393424 start.go:255] writing updated cluster config ...
	I1017 20:13:04.947315  393424 ssh_runner.go:195] Run: rm -f paused
	I1017 20:13:05.011023  393424 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:13:05.014969  393424 out.go:179] * Done! kubectl is now configured to use "newest-cni-051083" cluster and "default" namespace by default
	I1017 20:13:04.211879  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:13:04.212399  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:13:04.212470  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:13:04.212531  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:13:04.242645  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:13:04.242667  344862 cri.go:89] found id: ""
	I1017 20:13:04.242676  344862 logs.go:282] 1 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5]
	I1017 20:13:04.242774  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:13:04.247795  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:13:04.247873  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:13:04.285562  344862 cri.go:89] found id: ""
	I1017 20:13:04.285595  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.285606  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:13:04.285618  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:13:04.285676  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:13:04.322350  344862 cri.go:89] found id: ""
	I1017 20:13:04.322380  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.322392  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:13:04.322399  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:13:04.322462  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:13:04.358185  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:13:04.358206  344862 cri.go:89] found id: ""
	I1017 20:13:04.358214  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:13:04.358261  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:13:04.362681  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:13:04.362775  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:13:04.399527  344862 cri.go:89] found id: ""
	I1017 20:13:04.399558  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.399569  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:13:04.399577  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:13:04.399645  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:13:04.443818  344862 cri.go:89] found id: "a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:13:04.443845  344862 cri.go:89] found id: ""
	I1017 20:13:04.443856  344862 logs.go:282] 1 containers: [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54]
	I1017 20:13:04.443919  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:13:04.448756  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:13:04.448820  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:13:04.483018  344862 cri.go:89] found id: ""
	I1017 20:13:04.483052  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.483064  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:13:04.483072  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:13:04.483131  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:13:04.516610  344862 cri.go:89] found id: ""
	I1017 20:13:04.516643  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.516654  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:13:04.516666  344862 logs.go:123] Gathering logs for kube-apiserver [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5] ...
	I1017 20:13:04.516683  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:13:04.562489  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:13:04.562538  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:13:04.641377  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:13:04.641410  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:13:04.682504  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:13:04.682546  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:13:04.745368  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:13:04.745424  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:13:04.788178  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:13:04.788222  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:13:04.894186  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:13:04.894222  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:13:04.918292  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:13:04.918396  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:13:04.995522  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
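
	With the apiserver refusing connections, the log gathering above falls back to per-component crictl queries: list container IDs with "crictl ps -a --quiet --name=<component>", then tail each container's logs. A minimal sketch of that pattern (a hypothetical helper, not minikube's logs.go), shelling out the same way the ssh_runner lines do:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns all CRI container IDs (any state) whose name
	// matches the given component, one ID per output line from crictl.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := containerIDs("kube-apiserver")
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			return
		}
		for _, id := range ids {
			// Mirrors the log lines above: sudo crictl logs --tail 400 <id>
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s ==\n%s\n", id, logs)
		}
	}
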
	I1017 20:13:07.496810  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:13:07.497321  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:13:07.497387  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:13:07.497487  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:13:07.528629  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:13:07.528659  344862 cri.go:89] found id: ""
	I1017 20:13:07.528670  344862 logs.go:282] 1 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5]
	I1017 20:13:07.528759  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:13:07.534258  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:13:07.534355  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:13:07.570129  344862 cri.go:89] found id: ""
	I1017 20:13:07.570152  344862 logs.go:282] 0 containers: []
	W1017 20:13:07.570160  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:13:07.570165  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:13:07.570210  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:13:07.599973  344862 cri.go:89] found id: ""
	I1017 20:13:07.600000  344862 logs.go:282] 0 containers: []
	W1017 20:13:07.600011  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:13:07.600019  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:13:07.600069  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:13:04.290523  395845 out.go:252] * Restarting existing docker container for "embed-certs-051488" ...
	I1017 20:13:04.290607  395845 cli_runner.go:164] Run: docker start embed-certs-051488
	I1017 20:13:04.613267  395845 cli_runner.go:164] Run: docker container inspect embed-certs-051488 --format={{.State.Status}}
	I1017 20:13:04.636176  395845 kic.go:430] container "embed-certs-051488" state is running.
	I1017 20:13:04.636661  395845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-051488
	I1017 20:13:04.661620  395845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/config.json ...
	I1017 20:13:04.661990  395845 machine.go:93] provisionDockerMachine start ...
	I1017 20:13:04.662087  395845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-051488
	I1017 20:13:04.687256  395845 main.go:141] libmachine: Using SSH client type: native
	I1017 20:13:04.687596  395845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I1017 20:13:04.687621  395845 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:13:04.688406  395845 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36504->127.0.0.1:33209: read: connection reset by peer
	I1017 20:13:07.837957  395845 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-051488
	
	I1017 20:13:07.837989  395845 ubuntu.go:182] provisioning hostname "embed-certs-051488"
	I1017 20:13:07.838054  395845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-051488
	I1017 20:13:07.858992  395845 main.go:141] libmachine: Using SSH client type: native
	I1017 20:13:07.859286  395845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I1017 20:13:07.859311  395845 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-051488 && echo "embed-certs-051488" | sudo tee /etc/hostname
	I1017 20:13:08.017121  395845 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-051488
	
	I1017 20:13:08.017249  395845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-051488
	I1017 20:13:08.041239  395845 main.go:141] libmachine: Using SSH client type: native
	I1017 20:13:08.041501  395845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I1017 20:13:08.041522  395845 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-051488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-051488/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-051488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:13:08.190894  395845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:13:08.190930  395845 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:13:08.190982  395845 ubuntu.go:190] setting up certificates
	I1017 20:13:08.190995  395845 provision.go:84] configureAuth start
	I1017 20:13:08.191062  395845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-051488
	I1017 20:13:08.218901  395845 provision.go:143] copyHostCerts
	I1017 20:13:08.218987  395845 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:13:08.219009  395845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:13:08.219121  395845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:13:08.219297  395845 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:13:08.219311  395845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:13:08.219371  395845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:13:08.219501  395845 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:13:08.219513  395845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:13:08.219558  395845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:13:08.219661  395845 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.embed-certs-051488 san=[127.0.0.1 192.168.94.2 embed-certs-051488 localhost minikube]
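
	The server cert generated above carries a SAN for every name the machine may be dialed by: loopback, the container IP, the profile name, localhost and minikube. A minimal sketch of producing such a cert with only the Go standard library (minikube signs with its CA; this sketch self-signs for brevity, and the lifetime mirrors the CertExpiration:26280h0m0s value in the config earlier):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-051488"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log line above.
			DNSNames:    []string{"embed-certs-051488", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
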
	I1017 20:13:08.336195  395845 provision.go:177] copyRemoteCerts
	I1017 20:13:08.336287  395845 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:13:08.336340  395845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-051488
	I1017 20:13:08.356606  395845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/embed-certs-051488/id_rsa Username:docker}
	I1017 20:13:08.462628  395845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:13:08.484867  395845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:13:08.506228  395845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:13:08.528488  395845 provision.go:87] duration metric: took 337.478789ms to configureAuth
	I1017 20:13:08.528528  395845 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:13:08.528792  395845 config.go:182] Loaded profile config "embed-certs-051488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:08.528916  395845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-051488
	I1017 20:13:08.551882  395845 main.go:141] libmachine: Using SSH client type: native
	I1017 20:13:08.552098  395845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I1017 20:13:08.552115  395845 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:13:08.894393  395845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:13:08.894424  395845 machine.go:96] duration metric: took 4.232412471s to provisionDockerMachine
	I1017 20:13:08.894437  395845 start.go:293] postStartSetup for "embed-certs-051488" (driver="docker")
	I1017 20:13:08.894452  395845 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:13:08.894516  395845 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:13:08.894571  395845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-051488
	I1017 20:13:08.917869  395845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/embed-certs-051488/id_rsa Username:docker}
	I1017 20:13:09.022576  395845 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:13:09.028481  395845 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:13:09.028517  395845 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:13:09.028531  395845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:13:09.028593  395845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:13:09.028716  395845 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:13:09.028853  395845 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:13:09.043957  395845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	
	
	==> CRI-O <==
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.258667944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.262636016Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=916f1059-2d31-43d2-9116-1f10f109906a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.264526113Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1ea0aba1-33cd-4a0a-8b70-fa522ea03a74 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.265897616Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.266679343Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.266892785Z" level=info msg="Ran pod sandbox f3cf8740fdcdff033b2d13af8f653e59f4102699089dacb04dfeb0f4ef6cc9e9 with infra container: kube-system/kindnet-2k897/POD" id=916f1059-2d31-43d2-9116-1f10f109906a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.267613718Z" level=info msg="Ran pod sandbox 9fa326afa63b094f584232687e425763391f1adfd889bfbff2f3792ed5b56ab4 with infra container: kube-system/kube-proxy-bv8fn/POD" id=1ea0aba1-33cd-4a0a-8b70-fa522ea03a74 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.268845139Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=24c777f0-5da2-4b5f-a456-0ea274a6bdc2 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.26949417Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b878f69f-b3ee-40c8-981a-338059eaff53 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.269906964Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=05bf6d6a-23ec-476a-a599-6ee72c4ddba1 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.270607588Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=57ad1a5e-4b22-40d4-9f3f-245a342c7e35 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.271085118Z" level=info msg="Creating container: kube-system/kindnet-2k897/kindnet-cni" id=5d151bf2-5661-4795-8011-c542e916bdd4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.27143884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.275226119Z" level=info msg="Creating container: kube-system/kube-proxy-bv8fn/kube-proxy" id=68c03621-7196-44b8-8236-5c84016c7db4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.275687978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.275798179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.277071468Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.281603995Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.282406381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.3052276Z" level=info msg="Created container 73273a320d039563a683bba50e23eb61ca48cf1f1c34584dd5e722a6cfb37dfd: kube-system/kindnet-2k897/kindnet-cni" id=5d151bf2-5661-4795-8011-c542e916bdd4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.306076424Z" level=info msg="Starting container: 73273a320d039563a683bba50e23eb61ca48cf1f1c34584dd5e722a6cfb37dfd" id=699950b2-9b2b-48b7-859b-bd7f49d04ae5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.30846758Z" level=info msg="Started container" PID=1041 containerID=73273a320d039563a683bba50e23eb61ca48cf1f1c34584dd5e722a6cfb37dfd description=kube-system/kindnet-2k897/kindnet-cni id=699950b2-9b2b-48b7-859b-bd7f49d04ae5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3cf8740fdcdff033b2d13af8f653e59f4102699089dacb04dfeb0f4ef6cc9e9
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.310595242Z" level=info msg="Created container ed99414bac6b9ea79f328b7ccf57871536ced1a403fe8460b0da75d47e736716: kube-system/kube-proxy-bv8fn/kube-proxy" id=68c03621-7196-44b8-8236-5c84016c7db4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.311631006Z" level=info msg="Starting container: ed99414bac6b9ea79f328b7ccf57871536ced1a403fe8460b0da75d47e736716" id=66a2b69e-0c31-4d54-8286-59daa84874f7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:13:04 newest-cni-051083 crio[514]: time="2025-10-17T20:13:04.315206561Z" level=info msg="Started container" PID=1042 containerID=ed99414bac6b9ea79f328b7ccf57871536ced1a403fe8460b0da75d47e736716 description=kube-system/kube-proxy-bv8fn/kube-proxy id=66a2b69e-0c31-4d54-8286-59daa84874f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9fa326afa63b094f584232687e425763391f1adfd889bfbff2f3792ed5b56ab4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ed99414bac6b9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   9fa326afa63b0       kube-proxy-bv8fn                            kube-system
	73273a320d039       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   f3cf8740fdcdf       kindnet-2k897                               kube-system
	cc42dfd84be4f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   41bfc77de3dee       etcd-newest-cni-051083                      kube-system
	2931c4d6f33f5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   10d0b7e46743f       kube-scheduler-newest-cni-051083            kube-system
	932b7d1eb64f5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   7e2eb680324d9       kube-apiserver-newest-cni-051083            kube-system
	b96c4e8ab4485       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   5ab79b9634362       kube-controller-manager-newest-cni-051083   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-051083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-051083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=newest-cni-051083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_12_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:12:40 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-051083
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:13:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:13:03 +0000   Fri, 17 Oct 2025 20:12:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:13:03 +0000   Fri, 17 Oct 2025 20:12:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:13:03 +0000   Fri, 17 Oct 2025 20:12:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 17 Oct 2025 20:13:03 +0000   Fri, 17 Oct 2025 20:12:38 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-051083
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                f6bb8511-2049-4150-aef2-f04e212d38cd
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-051083                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-2k897                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-051083             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-051083    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-bv8fn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-051083             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s (x8 over 32s)  kubelet          Node newest-cni-051083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s (x8 over 32s)  kubelet          Node newest-cni-051083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s (x8 over 32s)  kubelet          Node newest-cni-051083 status is now: NodeHasSufficientPID
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    27s                kubelet          Node newest-cni-051083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s                kubelet          Node newest-cni-051083 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27s                kubelet          Node newest-cni-051083 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           23s                node-controller  Node newest-cni-051083 event: Registered Node newest-cni-051083 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 10s)   kubelet          Node newest-cni-051083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 10s)   kubelet          Node newest-cni-051083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 10s)   kubelet          Node newest-cni-051083 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-051083 event: Registered Node newest-cni-051083 in Controller
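	Note: the Ready=False condition above (NetworkPluginNotReady, no CNI config in /etc/cni/net.d/) was captured only seconds after kindnet restarted, so it is typically transient. A quick check, reusing the profile and node names from this report:
	  # Has the CNI plugin written its config yet? (path quoted from the condition message)
	  minikube ssh -p newest-cni-051083 "ls /etc/cni/net.d"
	  # Watch the Ready condition flip to True once kindnet syncs
	  kubectl --context newest-cni-051083 get node newest-cni-051083 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'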
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [cc42dfd84be4f5af1ec837f817b2596783c4cf948909c641a75e07dfb52e9d71] <==
	{"level":"warn","ts":"2025-10-17T20:13:02.785479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.794083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.802692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.809811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.816512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.823312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.830224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.837332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.844036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.850644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.857911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.864248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.871562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.878329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.891388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.898106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.905323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.911639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.918503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.925775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.941174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.944903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.951211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:02.957947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:03.005805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33616","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:13:10 up  1:55,  0 user,  load average: 6.60, 4.87, 2.94
	Linux newest-cni-051083 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [73273a320d039563a683bba50e23eb61ca48cf1f1c34584dd5e722a6cfb37dfd] <==
	I1017 20:13:04.580685       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:13:04.580953       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1017 20:13:04.581088       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:13:04.581106       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:13:04.581134       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:13:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:13:04.785753       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:13:04.785785       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:13:04.785809       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:13:04.785950       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:13:05.086895       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:13:05.086929       1 metrics.go:72] Registering metrics
	I1017 20:13:05.086986       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [932b7d1eb64f55b7e3fb460e5b9d3ffa1644b7ab3e1b81d603893cd983f9ba2b] <==
	I1017 20:13:03.472387       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:13:03.472396       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:13:03.469464       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:13:03.470794       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:13:03.472895       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:13:03.470819       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:13:03.470877       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:13:03.478271       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1017 20:13:03.480731       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:13:03.484238       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 20:13:03.487358       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:13:03.492077       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 20:13:03.492107       1 policy_source.go:240] refreshing policies
	I1017 20:13:03.513240       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:13:03.805106       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:13:03.839982       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:13:03.870458       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:13:03.879469       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:13:03.888939       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:13:03.927599       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.58.128"}
	I1017 20:13:03.943219       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.126.117"}
	I1017 20:13:04.373845       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:13:06.808458       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:13:07.208466       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:13:07.307039       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b96c4e8ab4485ef16cba36dad44b2b04cf5d5e7a68f7e5de57f6c0d891d205c6] <==
	I1017 20:13:06.775299       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:13:06.779556       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 20:13:06.779681       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:13:06.779791       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-051083"
	I1017 20:13:06.779846       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 20:13:06.803939       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 20:13:06.803954       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:13:06.803973       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 20:13:06.804163       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:13:06.804687       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:13:06.806258       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:13:06.809761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:13:06.809925       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:13:06.809986       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:13:06.810019       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:13:06.810025       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:13:06.810031       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:13:06.810133       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:13:06.810156       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:13:06.811848       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 20:13:06.817822       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:13:06.819860       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:13:06.822124       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:13:06.822134       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:13:06.826946       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [ed99414bac6b9ea79f328b7ccf57871536ced1a403fe8460b0da75d47e736716] <==
	I1017 20:13:04.359930       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:13:04.423662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:13:04.524572       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:13:04.524611       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1017 20:13:04.524793       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:13:04.549752       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:13:04.549916       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:13:04.557664       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:13:04.558102       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:13:04.558132       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:13:04.560361       1 config.go:200] "Starting service config controller"
	I1017 20:13:04.560414       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:13:04.560454       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:13:04.560477       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:13:04.560532       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:13:04.560555       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:13:04.561156       1 config.go:309] "Starting node config controller"
	I1017 20:13:04.561568       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:13:04.561608       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:13:04.660657       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:13:04.660687       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 20:13:04.660661       1 shared_informer.go:356] "Caches are synced" controller="service config"
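	Note: the "Kube-proxy configuration may be incomplete or incorrect" warning near the top of this block is advisory; the remedy is quoted in the message itself. A minimal sketch using that exact flag (how to thread it through minikube's own config is not shown in this report, so only the raw kube-proxy form is given):
	  # Restrict NodePort listeners to the node's primary interface, per the warning above
	  kube-proxy --nodeport-addresses=primary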
	
	
	==> kube-scheduler [2931c4d6f33f556407ac0a8d56dd07ee89f89feffa910248e7bebee0bbe9f80d] <==
	I1017 20:13:02.452629       1 serving.go:386] Generated self-signed cert in-memory
	W1017 20:13:03.397200       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 20:13:03.397280       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:13:03.397295       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 20:13:03.397321       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 20:13:03.454947       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:13:03.454985       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:13:03.458328       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:03.458385       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:03.459446       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:13:03.459539       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 20:13:03.461952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1017 20:13:04.558924       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:13:02 newest-cni-051083 kubelet[668]: E1017 20:13:02.992299     668 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-051083\" not found" node="newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.451986     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.507878     668 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.508000     668 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.508037     668 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.508951     668 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.528930     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: E1017 20:13:03.536998     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-051083\" already exists" pod="kube-system/kube-controller-manager-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: E1017 20:13:03.572836     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-051083\" already exists" pod="kube-system/etcd-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.572878     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: E1017 20:13:03.581338     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-051083\" already exists" pod="kube-system/kube-apiserver-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.581389     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: E1017 20:13:03.587982     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-051083\" already exists" pod="kube-system/kube-controller-manager-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.588019     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: E1017 20:13:03.594942     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-051083\" already exists" pod="kube-system/kube-scheduler-newest-cni-051083"
	Oct 17 20:13:03 newest-cni-051083 kubelet[668]: I1017 20:13:03.949908     668 apiserver.go:52] "Watching apiserver"
	Oct 17 20:13:04 newest-cni-051083 kubelet[668]: I1017 20:13:04.052426     668 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 17 20:13:04 newest-cni-051083 kubelet[668]: I1017 20:13:04.107545     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5deab5b-135e-40d2-8a6b-ec83d4c4fce5-xtables-lock\") pod \"kube-proxy-bv8fn\" (UID: \"e5deab5b-135e-40d2-8a6b-ec83d4c4fce5\") " pod="kube-system/kube-proxy-bv8fn"
	Oct 17 20:13:04 newest-cni-051083 kubelet[668]: I1017 20:13:04.107600     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30c67a93-f25e-435f-baf0-f939ba9859df-xtables-lock\") pod \"kindnet-2k897\" (UID: \"30c67a93-f25e-435f-baf0-f939ba9859df\") " pod="kube-system/kindnet-2k897"
	Oct 17 20:13:04 newest-cni-051083 kubelet[668]: I1017 20:13:04.107623     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5deab5b-135e-40d2-8a6b-ec83d4c4fce5-lib-modules\") pod \"kube-proxy-bv8fn\" (UID: \"e5deab5b-135e-40d2-8a6b-ec83d4c4fce5\") " pod="kube-system/kube-proxy-bv8fn"
	Oct 17 20:13:04 newest-cni-051083 kubelet[668]: I1017 20:13:04.107731     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/30c67a93-f25e-435f-baf0-f939ba9859df-cni-cfg\") pod \"kindnet-2k897\" (UID: \"30c67a93-f25e-435f-baf0-f939ba9859df\") " pod="kube-system/kindnet-2k897"
	Oct 17 20:13:04 newest-cni-051083 kubelet[668]: I1017 20:13:04.107915     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30c67a93-f25e-435f-baf0-f939ba9859df-lib-modules\") pod \"kindnet-2k897\" (UID: \"30c67a93-f25e-435f-baf0-f939ba9859df\") " pod="kube-system/kindnet-2k897"
	Oct 17 20:13:05 newest-cni-051083 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:13:05 newest-cni-051083 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:13:05 newest-cni-051083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-051083 -n newest-cni-051083
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-051083 -n newest-cni-051083: exit status 2 (360.915356ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-051083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-26q6r storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hpfnv kubernetes-dashboard-855c9754f9-nvzzl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-051083 describe pod coredns-66bc5c9577-26q6r storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hpfnv kubernetes-dashboard-855c9754f9-nvzzl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-051083 describe pod coredns-66bc5c9577-26q6r storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hpfnv kubernetes-dashboard-855c9754f9-nvzzl: exit status 1 (74.577144ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-26q6r" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-hpfnv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-nvzzl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-051083 describe pod coredns-66bc5c9577-26q6r storage-provisioner dashboard-metrics-scraper-6ffb444bf9-hpfnv kubernetes-dashboard-855c9754f9-nvzzl: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.71s)
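Note: per the Audit table in the next test's logs, the pause invocation for this profile never recorded an end time; rerunning it directly is the shortest reproduction:
	out/minikube-linux-amd64 pause -p newest-cni-051083 --alsologtostderr -v=1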

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-563805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-563805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (265.31157ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:13:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
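Note: the MK_ADDON_ENABLE_PAUSED failure above bottoms out in `sudo runc list -f json` exiting non-zero because /run/runc is missing inside the node container. A hedged reproduction from the host, reusing the container name from the docker inspect output below:
	# Re-run the exact command minikube's paused check uses (quoted from the stderr above)
	docker exec default-k8s-diff-port-563805 sudo runc list -f json
	# /run is a tmpfs in this container (see "Tmpfs" in the inspect output below),
	# so /run/runc exists only after runc has created container state there
	docker exec default-k8s-diff-port-563805 ls /run/runc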
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-563805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-563805 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-563805 describe deploy/metrics-server -n kube-system: exit status 1 (72.119044ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-563805 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-563805
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-563805:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655",
	        "Created": "2025-10-17T20:12:20.619875365Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 384613,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:12:21.214190534Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655/hostname",
	        "HostsPath": "/var/lib/docker/containers/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655/hosts",
	        "LogPath": "/var/lib/docker/containers/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655-json.log",
	        "Name": "/default-k8s-diff-port-563805",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-563805:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-563805",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655",
	                "LowerDir": "/var/lib/docker/overlay2/9694efb013e5aed72249f05b0bbf90d3e017142a17528a152939e78b8d67d837-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9694efb013e5aed72249f05b0bbf90d3e017142a17528a152939e78b8d67d837/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9694efb013e5aed72249f05b0bbf90d3e017142a17528a152939e78b8d67d837/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9694efb013e5aed72249f05b0bbf90d3e017142a17528a152939e78b8d67d837/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-563805",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-563805/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-563805",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-563805",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-563805",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "416e4b737af4526ed4698e175dcdc1b9e7a1a1a6f5f378a7baeee87452af2c9f",
	            "SandboxKey": "/var/run/docker/netns/416e4b737af4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33194"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33198"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33197"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-563805": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:81:f3:79:43:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9a4aaba57340b08a6dc80d718ca509a23c5f23e099fc7d8315ee78ac47b427de",
	                    "EndpointID": "2de0711d8088f86142927ac2ac1db066269a9795e659dfa7ec553b39b2c0f7c9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-563805",
	                        "7567eb504598"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-563805 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-563805 logs -n 25: (1.199059896s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-726816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ pause   │ -p old-k8s-version-726816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │                     │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ delete  │ -p old-k8s-version-726816                                                                                                                                                                                                                     │ old-k8s-version-726816       │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p cert-expiration-202048 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-202048       │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ image   │ no-preload-449580 image list --format=json                                                                                                                                                                                                    │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ pause   │ -p no-preload-449580 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ delete  │ -p cert-expiration-202048                                                                                                                                                                                                                     │ cert-expiration-202048       │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p disable-driver-mounts-270495                                                                                                                                                                                                               │ disable-driver-mounts-270495 │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p no-preload-449580                                                                                                                                                                                                                          │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ delete  │ -p no-preload-449580                                                                                                                                                                                                                          │ no-preload-449580            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ addons  │ enable metrics-server -p embed-certs-051488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ stop    │ -p embed-certs-051488 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:13 UTC │
	│ addons  │ enable metrics-server -p newest-cni-051083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ stop    │ -p newest-cni-051083 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-051083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:13 UTC │
	│ addons  │ enable dashboard -p embed-certs-051488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ image   │ newest-cni-051083 image list --format=json                                                                                                                                                                                                    │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ pause   │ -p newest-cni-051083 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-563805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:13:04
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:13:04.057932  395845 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:13:04.058195  395845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:13:04.058205  395845 out.go:374] Setting ErrFile to fd 2...
	I1017 20:13:04.058210  395845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:13:04.058436  395845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:13:04.058990  395845 out.go:368] Setting JSON to false
	I1017 20:13:04.060386  395845 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6932,"bootTime":1760725052,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:13:04.060483  395845 start.go:141] virtualization: kvm guest
	I1017 20:13:04.062422  395845 out.go:179] * [embed-certs-051488] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:13:04.063786  395845 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:13:04.063804  395845 notify.go:220] Checking for updates...
	I1017 20:13:04.066679  395845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:13:04.067970  395845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:13:04.072949  395845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:13:04.074279  395845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:13:04.075611  395845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:13:04.077287  395845 config.go:182] Loaded profile config "embed-certs-051488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:04.077805  395845 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:13:04.101800  395845 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:13:04.101908  395845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:13:04.164788  395845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:13:04.155127043 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:13:04.164904  395845 docker.go:318] overlay module found
	I1017 20:13:04.166781  395845 out.go:179] * Using the docker driver based on existing profile
	I1017 20:13:04.168070  395845 start.go:305] selected driver: docker
	I1017 20:13:04.168091  395845 start.go:925] validating driver "docker" against &{Name:embed-certs-051488 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-051488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:13:04.168202  395845 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:13:04.168883  395845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:13:04.232575  395845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:13:04.219903315 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:13:04.233013  395845 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:13:04.233046  395845 cni.go:84] Creating CNI manager for ""
	I1017 20:13:04.233115  395845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:13:04.233169  395845 start.go:349] cluster config:
	{Name:embed-certs-051488 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-051488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:13:04.234877  395845 out.go:179] * Starting "embed-certs-051488" primary control-plane node in "embed-certs-051488" cluster
	I1017 20:13:04.236012  395845 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:13:04.237238  395845 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:13:04.238458  395845 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:13:04.238500  395845 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:13:04.238516  395845 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 20:13:04.238529  395845 cache.go:58] Caching tarball of preloaded images
	I1017 20:13:04.238650  395845 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:13:04.238663  395845 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:13:04.238823  395845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/embed-certs-051488/config.json ...
	I1017 20:13:04.263599  395845 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:13:04.263633  395845 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:13:04.263655  395845 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:13:04.263684  395845 start.go:360] acquireMachinesLock for embed-certs-051488: {Name:mk6afa1aece12c87fd06ad5337662430a71ab0ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:13:04.263762  395845 start.go:364] duration metric: took 58.169µs to acquireMachinesLock for "embed-certs-051488"
	I1017 20:13:04.263787  395845 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:13:04.263796  395845 fix.go:54] fixHost starting: 
	I1017 20:13:04.264133  395845 cli_runner.go:164] Run: docker container inspect embed-certs-051488 --format={{.State.Status}}
	I1017 20:13:04.288330  395845 fix.go:112] recreateIfNeeded on embed-certs-051488: state=Stopped err=<nil>
	W1017 20:13:04.288382  395845 fix.go:138] unexpected machine state, will restart: <nil>
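	
	The fixHost path above inspects the state of the existing profile container before deciding how to proceed: a Stopped container is restarted rather than recreated. A minimal sketch of the same check from a shell, assuming the docker driver and the embed-certs-051488 profile seen in this log:
	
	  # Same inspection fix.go runs; docker prints "exited", "running", etc. for the profile container
	  docker container inspect embed-certs-051488 --format '{{.State.Status}}'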
	I1017 20:13:04.050559  393424 addons.go:514] duration metric: took 2.318814614s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 20:13:04.428370  393424 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:13:04.436198  393424 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:13:04.436227  393424 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:13:04.928919  393424 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:13:04.934361  393424 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1017 20:13:04.935619  393424 api_server.go:141] control plane version: v1.34.1
	I1017 20:13:04.935639  393424 api_server.go:131] duration metric: took 3.007435204s to wait for apiserver health ...
	I1017 20:13:04.935650  393424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:13:04.940810  393424 system_pods.go:59] 8 kube-system pods found
	I1017 20:13:04.940855  393424 system_pods.go:61] "coredns-66bc5c9577-26q6r" [9f41e0e1-0ec5-4641-89b1-0c3489fd8ded] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 20:13:04.940864  393424 system_pods.go:61] "etcd-newest-cni-051083" [a0343ecd-b1ea-4a09-a05b-fba7a474213c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:13:04.940872  393424 system_pods.go:61] "kindnet-2k897" [30c67a93-f25e-435f-baf0-f939ba9859df] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 20:13:04.940878  393424 system_pods.go:61] "kube-apiserver-newest-cni-051083" [657a2192-282d-409f-8893-014d034cd42d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:13:04.940884  393424 system_pods.go:61] "kube-controller-manager-newest-cni-051083" [6f894a23-fc07-48df-b282-4e4335e3ca12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:13:04.940893  393424 system_pods.go:61] "kube-proxy-bv8fn" [e5deab5b-135e-40d2-8a6b-ec83d4c4fce5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:13:04.940899  393424 system_pods.go:61] "kube-scheduler-newest-cni-051083" [5ab00384-0333-49c7-a1ac-012b9d035066] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:13:04.940904  393424 system_pods.go:61] "storage-provisioner" [2699b8f0-5373-4f6e-8e29-f68953e6a741] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 20:13:04.940911  393424 system_pods.go:74] duration metric: took 5.25518ms to wait for pod list to return data ...
	I1017 20:13:04.940919  393424 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:13:04.943909  393424 default_sa.go:45] found service account: "default"
	I1017 20:13:04.943932  393424 default_sa.go:55] duration metric: took 3.007687ms for default service account to be created ...
	I1017 20:13:04.943944  393424 kubeadm.go:586] duration metric: took 3.212237584s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 20:13:04.943961  393424 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:13:04.946902  393424 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:13:04.946933  393424 node_conditions.go:123] node cpu capacity is 8
	I1017 20:13:04.946949  393424 node_conditions.go:105] duration metric: took 2.983443ms to run NodePressure ...
	I1017 20:13:04.946975  393424 start.go:241] waiting for startup goroutines ...
	I1017 20:13:04.946990  393424 start.go:246] waiting for cluster config update ...
	I1017 20:13:04.947004  393424 start.go:255] writing updated cluster config ...
	I1017 20:13:04.947315  393424 ssh_runner.go:195] Run: rm -f paused
	I1017 20:13:05.011023  393424 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:13:05.014969  393424 out.go:179] * Done! kubectl is now configured to use "newest-cni-051083" cluster and "default" namespace by default
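	
	The 500-then-200 sequence above is the normal apiserver bootstrap pattern: /healthz keeps returning 500 while the rbac/bootstrap-roles post-start hook is still running, then flips to 200 once bootstrap finishes. A minimal sketch of the same verbose probe, assuming kubectl is already pointed at the cluster:
	
	  # Shows the per-hook [+]/[-] lines quoted in the log
	  kubectl get --raw='/healthz?verbose'
	  # Direct probe of the endpoint; -k skips TLS verification, and client credentials may be required if anonymous auth is off
	  curl -k https://192.168.103.2:8443/healthz?verbose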
	I1017 20:13:04.211879  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:13:04.212399  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:13:04.212470  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:13:04.212531  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:13:04.242645  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:13:04.242667  344862 cri.go:89] found id: ""
	I1017 20:13:04.242676  344862 logs.go:282] 1 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5]
	I1017 20:13:04.242774  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:13:04.247795  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:13:04.247873  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:13:04.285562  344862 cri.go:89] found id: ""
	I1017 20:13:04.285595  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.285606  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:13:04.285618  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:13:04.285676  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:13:04.322350  344862 cri.go:89] found id: ""
	I1017 20:13:04.322380  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.322392  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:13:04.322399  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:13:04.322462  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 20:13:04.358185  344862 cri.go:89] found id: "ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:13:04.358206  344862 cri.go:89] found id: ""
	I1017 20:13:04.358214  344862 logs.go:282] 1 containers: [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497]
	I1017 20:13:04.358261  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:13:04.362681  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 20:13:04.362775  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 20:13:04.399527  344862 cri.go:89] found id: ""
	I1017 20:13:04.399558  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.399569  344862 logs.go:284] No container was found matching "kube-proxy"
	I1017 20:13:04.399577  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 20:13:04.399645  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 20:13:04.443818  344862 cri.go:89] found id: "a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:13:04.443845  344862 cri.go:89] found id: ""
	I1017 20:13:04.443856  344862 logs.go:282] 1 containers: [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54]
	I1017 20:13:04.443919  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:13:04.448756  344862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 20:13:04.448820  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 20:13:04.483018  344862 cri.go:89] found id: ""
	I1017 20:13:04.483052  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.483064  344862 logs.go:284] No container was found matching "kindnet"
	I1017 20:13:04.483072  344862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1017 20:13:04.483131  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1017 20:13:04.516610  344862 cri.go:89] found id: ""
	I1017 20:13:04.516643  344862 logs.go:282] 0 containers: []
	W1017 20:13:04.516654  344862 logs.go:284] No container was found matching "storage-provisioner"
	I1017 20:13:04.516666  344862 logs.go:123] Gathering logs for kube-apiserver [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5] ...
	I1017 20:13:04.516683  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:13:04.562489  344862 logs.go:123] Gathering logs for kube-scheduler [ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497] ...
	I1017 20:13:04.562538  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ba69fc8e2336600a5b4aaee3673ff2b7efcd1beb539dde568f7385ed6482d497"
	I1017 20:13:04.641377  344862 logs.go:123] Gathering logs for kube-controller-manager [a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54] ...
	I1017 20:13:04.641410  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a6997224c00c9349126cf66e4f846d4917758567d18ef0a7119d3c600603cc54"
	I1017 20:13:04.682504  344862 logs.go:123] Gathering logs for CRI-O ...
	I1017 20:13:04.682546  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 20:13:04.745368  344862 logs.go:123] Gathering logs for container status ...
	I1017 20:13:04.745424  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 20:13:04.788178  344862 logs.go:123] Gathering logs for kubelet ...
	I1017 20:13:04.788222  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 20:13:04.894186  344862 logs.go:123] Gathering logs for dmesg ...
	I1017 20:13:04.894222  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 20:13:04.918292  344862 logs.go:123] Gathering logs for describe nodes ...
	I1017 20:13:04.918396  344862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 20:13:04.995522  344862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 20:13:07.496810  344862 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:13:07.497321  344862 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1017 20:13:07.497387  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 20:13:07.497487  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 20:13:07.528629  344862 cri.go:89] found id: "368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5"
	I1017 20:13:07.528659  344862 cri.go:89] found id: ""
	I1017 20:13:07.528670  344862 logs.go:282] 1 containers: [368a356d6efceef1f95d0fce54e96b795abae17c890f7b61401f53dd1d2394e5]
	I1017 20:13:07.528759  344862 ssh_runner.go:195] Run: which crictl
	I1017 20:13:07.534258  344862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 20:13:07.534355  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 20:13:07.570129  344862 cri.go:89] found id: ""
	I1017 20:13:07.570152  344862 logs.go:282] 0 containers: []
	W1017 20:13:07.570160  344862 logs.go:284] No container was found matching "etcd"
	I1017 20:13:07.570165  344862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 20:13:07.570210  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 20:13:07.599973  344862 cri.go:89] found id: ""
	I1017 20:13:07.600000  344862 logs.go:282] 0 containers: []
	W1017 20:13:07.600011  344862 logs.go:284] No container was found matching "coredns"
	I1017 20:13:07.600019  344862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 20:13:07.600069  344862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
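	
	The listing calls above all follow one pattern: resolve container IDs with crictl, then pull the last 400 log lines for every component that actually has a container. A minimal sketch of that loop, assuming a shell on the node with crictl on the PATH:
	
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	    id=$(sudo crictl ps -a --quiet --name="$c")        # empty when the container never started
	    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"  # mirrors the per-container gathering above
	  done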
	
	
	==> CRI-O <==
	Oct 17 20:12:55 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:55.713407203Z" level=info msg="Starting container: 24cefce47980ef85448c6a9277caa35ce2c51e20df9a29de12227bab1c01d2ba" id=f85401e5-eebf-4051-bd4e-97c23c807a75 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:12:55 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:55.715607308Z" level=info msg="Started container" PID=1832 containerID=24cefce47980ef85448c6a9277caa35ce2c51e20df9a29de12227bab1c01d2ba description=kube-system/coredns-66bc5c9577-bsp94/coredns id=f85401e5-eebf-4051-bd4e-97c23c807a75 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45e45f8b7e9d8f15219ca1736e03982b55bc48e9e9d4f49601d3ac92c6d9464a
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.351972498Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b8f8f7d8-e119-44f8-9e1c-222f221ddb50 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.352055924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.357089873Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:07a13b3ece38fcc465313b0d6c6d5db7c40caef547d9089bc7312677f9bde648 UID:4b56d57d-2571-48ae-86e3-1ba948f2a6fa NetNS:/var/run/netns/69f025ef-b163-41bb-87bf-31c001aa88a7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000784a20}] Aliases:map[]}"
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.357123098Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.368590258Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:07a13b3ece38fcc465313b0d6c6d5db7c40caef547d9089bc7312677f9bde648 UID:4b56d57d-2571-48ae-86e3-1ba948f2a6fa NetNS:/var/run/netns/69f025ef-b163-41bb-87bf-31c001aa88a7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000784a20}] Aliases:map[]}"
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.368715338Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.369540833Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.370417049Z" level=info msg="Ran pod sandbox 07a13b3ece38fcc465313b0d6c6d5db7c40caef547d9089bc7312677f9bde648 with infra container: default/busybox/POD" id=b8f8f7d8-e119-44f8-9e1c-222f221ddb50 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.371678302Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=29b8a2d4-957b-4f96-85f8-5427ecd3d6a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.371856776Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=29b8a2d4-957b-4f96-85f8-5427ecd3d6a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.372125369Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=29b8a2d4-957b-4f96-85f8-5427ecd3d6a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.373418816Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d92ee071-c1fb-4bc6-9369-d508a2b3cd26 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:12:58 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:12:58.376129364Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 20:13:00 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:13:00.299550442Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=d92ee071-c1fb-4bc6-9369-d508a2b3cd26 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:13:00 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:13:00.300371719Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=131f6ee5-2acc-4653-bcf9-50deb61729e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:00 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:13:00.301815513Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5a105d7f-66c1-4435-8fb1-b5c86f0a7701 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:00 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:13:00.305927223Z" level=info msg="Creating container: default/busybox/busybox" id=18ac347a-717b-458e-94fa-760010313991 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:00 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:13:00.306562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:00 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:13:00.310319355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:00 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:13:00.31070121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:00 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:13:00.335478037Z" level=info msg="Created container e3ec6851c3798e8313ab067740396fde82a66fa94cf2aaa954aedaafd8bc1b0e: default/busybox/busybox" id=18ac347a-717b-458e-94fa-760010313991 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:00 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:13:00.336174614Z" level=info msg="Starting container: e3ec6851c3798e8313ab067740396fde82a66fa94cf2aaa954aedaafd8bc1b0e" id=a62f44ea-bf0b-423d-95f7-20b94fdaccf6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:13:00 default-k8s-diff-port-563805 crio[780]: time="2025-10-17T20:13:00.338073413Z" level=info msg="Started container" PID=1909 containerID=e3ec6851c3798e8313ab067740396fde82a66fa94cf2aaa954aedaafd8bc1b0e description=default/busybox/busybox id=a62f44ea-bf0b-423d-95f7-20b94fdaccf6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=07a13b3ece38fcc465313b0d6c6d5db7c40caef547d9089bc7312677f9bde648
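	
	The CRI-O entries above trace one complete pod start: RunPodSandbox wires default/busybox into the kindnet CNI network, ImageStatus finds the image missing, PullImage fetches it by digest, and CreateContainer/StartContainer run it. A minimal sketch of replaying the image steps by hand with crictl, assuming the same tag:
	
	  sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc  # same PullImage call, driven from the CLI
	  sudo crictl images --digests | grep busybox                # confirms the sha256 digest logged above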
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	e3ec6851c3798       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   07a13b3ece38f       busybox                                                default
	24cefce47980e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   45e45f8b7e9d8       coredns-66bc5c9577-bsp94                               kube-system
	05e4ebf79fa22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   b856c4807e388       storage-provisioner                                    kube-system
	d84eb6eefcea3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   d8dbf6cb141ee       kindnet-gzsxs                                          kube-system
	b93a20fb0b7ba       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   0de9cfdca1b2a       kube-proxy-g7749                                       kube-system
	13c503cbb2833       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   d817215528588       etcd-default-k8s-diff-port-563805                      kube-system
	0ad9ebc0fffef       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   ba2e14cf2d7fa       kube-scheduler-default-k8s-diff-port-563805            kube-system
	d14aa56f37d15       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   06b067490bb41       kube-apiserver-default-k8s-diff-port-563805            kube-system
	1989bd2fadfed       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   3fef97546be12       kube-controller-manager-default-k8s-diff-port-563805   kube-system
	
	
	==> coredns [24cefce47980ef85448c6a9277caa35ce2c51e20df9a29de12227bab1c01d2ba] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53539 - 28016 "HINFO IN 8605134859901475596.596056718039487377. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.129331454s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-563805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-563805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=default-k8s-diff-port-563805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_12_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:12:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-563805
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:13:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:13:09 +0000   Fri, 17 Oct 2025 20:12:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:13:09 +0000   Fri, 17 Oct 2025 20:12:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:13:09 +0000   Fri, 17 Oct 2025 20:12:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:13:09 +0000   Fri, 17 Oct 2025 20:12:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-563805
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                8216883e-3ed5-4f7d-8ef7-444b758f4457
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-bsp94                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-563805                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-gzsxs                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-563805             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-563805    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-g7749                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-563805             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node default-k8s-diff-port-563805 event: Registered Node default-k8s-diff-port-563805 in Controller
	  Normal  NodeReady                14s   kubelet          Node default-k8s-diff-port-563805 status is now: NodeReady
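	
	The node dump above is ordinary kubectl describe node output, collected with the cluster-local binary and kubeconfig, i.e. the same invocation that failed with "connection refused" at 20:13:04 against the other profile:
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig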
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
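	
	The martian-source lines are the kernel flagging packets whose source address is implausible for the interface they arrived on (here 127.0.0.1 showing up on eth0); they only reach the ring buffer when martian logging is enabled. A minimal sketch of checking that toggle and re-filtering the buffer the way this report does:
	
	  sysctl net.ipv4.conf.all.log_martians   # 1 = martian packets are logged
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400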
	
	
	==> etcd [13c503cbb28330cc2611fba86a838ffdf25373a55ccce72e5175d733b7b027f4] <==
	{"level":"warn","ts":"2025-10-17T20:12:35.516189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.525848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.533319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.540887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.548759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.563062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.568298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.574432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.582726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.589358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.597267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.604999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.611923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.618449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.626442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.634426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.641697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.649389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.656994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.665612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.678995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.683434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.690830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.697929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:12:35.754292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56606","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:13:09 up  1:55,  0 user,  load average: 6.60, 4.87, 2.94
	Linux default-k8s-diff-port-563805 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d84eb6eefcea35ef21de2ae9b67dfbb1a928008bd3f86b774c66929cc85688ce] <==
	I1017 20:12:44.879575       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:12:44.879847       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 20:12:44.879969       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:12:44.879986       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:12:44.880006       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:12:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:12:45.170725       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:12:45.177519       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:12:45.269271       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:12:45.269486       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:12:45.470098       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:12:45.470261       1 metrics.go:72] Registering metrics
	I1017 20:12:45.470474       1 controller.go:711] "Syncing nftables rules"
	I1017 20:12:55.172822       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:12:55.172913       1 main.go:301] handling current node
	I1017 20:13:05.172829       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:13:05.172886       1 main.go:301] handling current node
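	# The NRI line above means the runtime exposes no NRI socket, so kindnet's
	# optional kube-network-policies NRI plugin backs off; the rest of this
	# section is healthy. A hedged check from the host (profile name taken from
	# this run; the command is illustrative, not part of the test):
	minikube -p default-k8s-diff-port-563805 ssh -- ls -l /var/run/nri/nri.sock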
	
	
	==> kube-apiserver [d14aa56f37d1540b10df7b59feedceffe9c5d7f28c340a49fbccaeff3699346f] <==
	I1017 20:12:36.250939       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:12:36.253517       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:12:36.253975       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 20:12:36.259341       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:12:36.259561       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:12:36.274674       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 20:12:36.436100       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:12:37.154672       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 20:12:37.159041       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 20:12:37.159060       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:12:37.780138       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:12:37.831808       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:12:37.960765       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 20:12:37.968105       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1017 20:12:37.969375       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:12:37.974247       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:12:38.176664       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:12:39.047692       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:12:39.063278       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 20:12:39.072107       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 20:12:44.034338       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:12:44.047396       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:12:44.082051       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:12:44.231395       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1017 20:13:08.161670       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:48264: use of closed network connection
	
	
	==> kube-controller-manager [1989bd2fadfed8e067b931bf2b72a7d06bf4b909ac090aa31c83750387e3a244] <==
	I1017 20:12:43.176576       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 20:12:43.176883       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:12:43.177055       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:12:43.177066       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:12:43.177350       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:12:43.177601       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:12:43.177920       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:12:43.178285       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 20:12:43.178365       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:12:43.178432       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:12:43.178443       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:12:43.178626       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:12:43.178892       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:12:43.182485       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:12:43.182543       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:12:43.182581       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:12:43.182588       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:12:43.182595       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:12:43.184765       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:12:43.185945       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:12:43.190577       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-563805" podCIDRs=["10.244.0.0/24"]
	I1017 20:12:43.194424       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:12:43.202963       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 20:12:43.221881       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:12:58.140847       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b93a20fb0b7ba67c2568a7237e483e764f18c1e5261c3c4bf6a42736ea0d496c] <==
	I1017 20:12:44.720242       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:12:44.783472       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:12:44.883827       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:12:44.883875       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 20:12:44.883986       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:12:44.907037       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:12:44.907113       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:12:44.912897       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:12:44.913367       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:12:44.913401       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:12:44.914667       1 config.go:200] "Starting service config controller"
	I1017 20:12:44.914703       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:12:44.914830       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:12:44.914845       1 config.go:309] "Starting node config controller"
	I1017 20:12:44.914872       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:12:44.914880       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:12:44.914888       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:12:44.914908       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:12:44.914914       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:12:45.014832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:12:45.016127       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:12:45.016512       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
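	# The only complaint above is kube-proxy's own configuration hint. A minimal
	# sketch, assuming the stock kubeadm layout that minikube uses: the setting
	# lives in the kube-proxy ConfigMap, and adding nodePortAddresses:
	# ["primary"] there before restarting the DaemonSet clears the warning.
	kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	kubectl -n kube-system rollout restart daemonset/kube-proxy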
	
	
	==> kube-scheduler [0ad9ebc0fffef8b9e9ed73b1a98b5a29624d7a0090f8a35deaf636021c3996ce] <==
	E1017 20:12:36.208692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:12:36.208774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:12:36.208771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:12:36.208855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:12:36.208925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:12:36.208926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:12:36.209022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:12:36.209057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:12:37.055854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:12:37.078289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:12:37.112355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:12:37.139991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:12:37.204458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:12:37.222397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:12:37.259630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:12:37.264911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:12:37.300207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:12:37.432685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:12:37.448405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:12:37.476688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:12:37.484755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:12:37.523218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:12:37.551636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 20:12:37.568095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1017 20:12:39.603442       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
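	# The "Failed to watch ... forbidden" errors are a startup race: the
	# scheduler begins listing resources before its RBAC bindings exist, and the
	# final "Caches are synced" line shows it recovered. A hedged way to confirm
	# the same permission after startup, outside this test run:
	kubectl auth can-i list persistentvolumes --as=system:kube-scheduler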
	
	
	==> kubelet <==
	Oct 17 20:12:40 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:40.064981    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-563805" podStartSLOduration=1.064957223 podStartE2EDuration="1.064957223s" podCreationTimestamp="2025-10-17 20:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:40.05356305 +0000 UTC m=+1.211659781" watchObservedRunningTime="2025-10-17 20:12:40.064957223 +0000 UTC m=+1.223053959"
	Oct 17 20:12:40 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:40.083463    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-563805" podStartSLOduration=1.083439646 podStartE2EDuration="1.083439646s" podCreationTimestamp="2025-10-17 20:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:40.06516669 +0000 UTC m=+1.223263429" watchObservedRunningTime="2025-10-17 20:12:40.083439646 +0000 UTC m=+1.241536373"
	Oct 17 20:12:43 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:43.215821    1318 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 20:12:43 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:43.216559    1318 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 20:12:44 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:44.289653    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-563805" podStartSLOduration=5.289631402 podStartE2EDuration="5.289631402s" podCreationTimestamp="2025-10-17 20:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:40.084273556 +0000 UTC m=+1.242370290" watchObservedRunningTime="2025-10-17 20:12:44.289631402 +0000 UTC m=+5.447728138"
	Oct 17 20:12:44 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:44.372113    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eeb2f556-2ec6-4874-a910-c441e7cc0770-xtables-lock\") pod \"kindnet-gzsxs\" (UID: \"eeb2f556-2ec6-4874-a910-c441e7cc0770\") " pod="kube-system/kindnet-gzsxs"
	Oct 17 20:12:44 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:44.372189    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeb2f556-2ec6-4874-a910-c441e7cc0770-lib-modules\") pod \"kindnet-gzsxs\" (UID: \"eeb2f556-2ec6-4874-a910-c441e7cc0770\") " pod="kube-system/kindnet-gzsxs"
	Oct 17 20:12:44 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:44.372216    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5cvd\" (UniqueName: \"kubernetes.io/projected/eeb2f556-2ec6-4874-a910-c441e7cc0770-kube-api-access-h5cvd\") pod \"kindnet-gzsxs\" (UID: \"eeb2f556-2ec6-4874-a910-c441e7cc0770\") " pod="kube-system/kindnet-gzsxs"
	Oct 17 20:12:44 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:44.372243    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/812ff08f-93ab-4a35-bf0c-2aabb5d4b9b8-kube-proxy\") pod \"kube-proxy-g7749\" (UID: \"812ff08f-93ab-4a35-bf0c-2aabb5d4b9b8\") " pod="kube-system/kube-proxy-g7749"
	Oct 17 20:12:44 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:44.372274    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/812ff08f-93ab-4a35-bf0c-2aabb5d4b9b8-xtables-lock\") pod \"kube-proxy-g7749\" (UID: \"812ff08f-93ab-4a35-bf0c-2aabb5d4b9b8\") " pod="kube-system/kube-proxy-g7749"
	Oct 17 20:12:44 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:44.372343    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/812ff08f-93ab-4a35-bf0c-2aabb5d4b9b8-lib-modules\") pod \"kube-proxy-g7749\" (UID: \"812ff08f-93ab-4a35-bf0c-2aabb5d4b9b8\") " pod="kube-system/kube-proxy-g7749"
	Oct 17 20:12:44 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:44.372404    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn4nm\" (UniqueName: \"kubernetes.io/projected/812ff08f-93ab-4a35-bf0c-2aabb5d4b9b8-kube-api-access-jn4nm\") pod \"kube-proxy-g7749\" (UID: \"812ff08f-93ab-4a35-bf0c-2aabb5d4b9b8\") " pod="kube-system/kube-proxy-g7749"
	Oct 17 20:12:44 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:44.372535    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eeb2f556-2ec6-4874-a910-c441e7cc0770-cni-cfg\") pod \"kindnet-gzsxs\" (UID: \"eeb2f556-2ec6-4874-a910-c441e7cc0770\") " pod="kube-system/kindnet-gzsxs"
	Oct 17 20:12:45 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:45.048933    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g7749" podStartSLOduration=1.048906206 podStartE2EDuration="1.048906206s" podCreationTimestamp="2025-10-17 20:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:45.032569038 +0000 UTC m=+6.190665774" watchObservedRunningTime="2025-10-17 20:12:45.048906206 +0000 UTC m=+6.207002943"
	Oct 17 20:12:46 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:46.758291    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gzsxs" podStartSLOduration=2.758272662 podStartE2EDuration="2.758272662s" podCreationTimestamp="2025-10-17 20:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:45.049239523 +0000 UTC m=+6.207336281" watchObservedRunningTime="2025-10-17 20:12:46.758272662 +0000 UTC m=+7.916369397"
	Oct 17 20:12:55 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:55.315771    1318 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 20:12:55 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:55.444925    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/654c455e-6dcf-46d1-8664-0c1579d0a498-tmp\") pod \"storage-provisioner\" (UID: \"654c455e-6dcf-46d1-8664-0c1579d0a498\") " pod="kube-system/storage-provisioner"
	Oct 17 20:12:55 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:55.444981    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf23fe6e-8ed0-4e40-92cd-65e6940b198d-config-volume\") pod \"coredns-66bc5c9577-bsp94\" (UID: \"bf23fe6e-8ed0-4e40-92cd-65e6940b198d\") " pod="kube-system/coredns-66bc5c9577-bsp94"
	Oct 17 20:12:55 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:55.445010    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlfwc\" (UniqueName: \"kubernetes.io/projected/bf23fe6e-8ed0-4e40-92cd-65e6940b198d-kube-api-access-nlfwc\") pod \"coredns-66bc5c9577-bsp94\" (UID: \"bf23fe6e-8ed0-4e40-92cd-65e6940b198d\") " pod="kube-system/coredns-66bc5c9577-bsp94"
	Oct 17 20:12:55 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:55.445031    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrvkm\" (UniqueName: \"kubernetes.io/projected/654c455e-6dcf-46d1-8664-0c1579d0a498-kube-api-access-vrvkm\") pod \"storage-provisioner\" (UID: \"654c455e-6dcf-46d1-8664-0c1579d0a498\") " pod="kube-system/storage-provisioner"
	Oct 17 20:12:56 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:56.056721    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.056698569 podStartE2EDuration="12.056698569s" podCreationTimestamp="2025-10-17 20:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:56.056601392 +0000 UTC m=+17.214698129" watchObservedRunningTime="2025-10-17 20:12:56.056698569 +0000 UTC m=+17.214795306"
	Oct 17 20:12:56 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:56.068734    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bsp94" podStartSLOduration=12.068713507 podStartE2EDuration="12.068713507s" podCreationTimestamp="2025-10-17 20:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:12:56.068368773 +0000 UTC m=+17.226465531" watchObservedRunningTime="2025-10-17 20:12:56.068713507 +0000 UTC m=+17.226810242"
	Oct 17 20:12:58 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:12:58.164925    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz9jr\" (UniqueName: \"kubernetes.io/projected/4b56d57d-2571-48ae-86e3-1ba948f2a6fa-kube-api-access-hz9jr\") pod \"busybox\" (UID: \"4b56d57d-2571-48ae-86e3-1ba948f2a6fa\") " pod="default/busybox"
	Oct 17 20:13:01 default-k8s-diff-port-563805 kubelet[1318]: I1017 20:13:01.074253    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.145455473 podStartE2EDuration="3.074227954s" podCreationTimestamp="2025-10-17 20:12:58 +0000 UTC" firstStartedPulling="2025-10-17 20:12:58.372505693 +0000 UTC m=+19.530602427" lastFinishedPulling="2025-10-17 20:13:00.301278176 +0000 UTC m=+21.459374908" observedRunningTime="2025-10-17 20:13:01.074132318 +0000 UTC m=+22.232229054" watchObservedRunningTime="2025-10-17 20:13:01.074227954 +0000 UTC m=+22.232324691"
	Oct 17 20:13:08 default-k8s-diff-port-563805 kubelet[1318]: E1017 20:13:08.161514    1318 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47540->127.0.0.1:36637: write tcp 127.0.0.1:47540->127.0.0.1:36637: write: broken pipe
	
	
	==> storage-provisioner [05e4ebf79fa22a3179becfdc89a88cf4ce0077d27f00c7df278a7abbfe863808] <==
	I1017 20:12:55.724230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:12:55.732601       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:12:55.732651       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:12:55.735346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:55.741750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:12:55.741946       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:12:55.742122       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-563805_53665ba1-11f1-450f-9055-b3f7f5f4d946!
	I1017 20:12:55.742108       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7cbc6369-13f8-42ff-8d5e-a08248991cf2", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-563805_53665ba1-11f1-450f-9055-b3f7f5f4d946 became leader
	W1017 20:12:55.744558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:55.749043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:12:55.843254       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-563805_53665ba1-11f1-450f-9055-b3f7f5f4d946!
	W1017 20:12:57.752535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:57.756824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:59.760625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:12:59.766946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:01.775069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:01.783896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:03.790147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:03.796350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:05.800315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:05.805012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:07.808625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:07.814512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:09.819015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:09.829701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
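	# The repeated deprecation warnings come from the provisioner's
	# leader-election lock, which is still a v1 Endpoints object (the
	# LeaderElection event above names it). Inspecting the legacy lock and the
	# Lease objects that replace it:
	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath
	kubectl -n kube-system get leases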
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-563805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-051488 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-051488 --alsologtostderr -v=1: exit status 80 (1.733693979s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-051488 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:14:00.741298  410721 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:14:00.741562  410721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:14:00.741570  410721 out.go:374] Setting ErrFile to fd 2...
	I1017 20:14:00.741574  410721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:14:00.741809  410721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:14:00.742062  410721 out.go:368] Setting JSON to false
	I1017 20:14:00.742113  410721 mustload.go:65] Loading cluster: embed-certs-051488
	I1017 20:14:00.742447  410721 config.go:182] Loaded profile config "embed-certs-051488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:14:00.742912  410721 cli_runner.go:164] Run: docker container inspect embed-certs-051488 --format={{.State.Status}}
	I1017 20:14:00.761900  410721 host.go:66] Checking if "embed-certs-051488" exists ...
	I1017 20:14:00.762281  410721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:14:00.824565  410721 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 20:14:00.813593124 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:14:00.825248  410721 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-051488 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:14:00.827850  410721 out.go:179] * Pausing node embed-certs-051488 ... 
	I1017 20:14:00.829228  410721 host.go:66] Checking if "embed-certs-051488" exists ...
	I1017 20:14:00.829505  410721 ssh_runner.go:195] Run: systemctl --version
	I1017 20:14:00.829544  410721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-051488
	I1017 20:14:00.847757  410721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/embed-certs-051488/id_rsa Username:docker}
	I1017 20:14:00.944116  410721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:14:00.957646  410721 pause.go:52] kubelet running: true
	I1017 20:14:00.957720  410721 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:14:01.129626  410721 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:14:01.129731  410721 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:14:01.200716  410721 cri.go:89] found id: "3c57fa6d89c0b59a810362081ee84b1bd7cda2168f28b703f844483a10a796ab"
	I1017 20:14:01.200774  410721 cri.go:89] found id: "4fc977badd37b631fffe234d4d78fa83b65352d8cb445378af3c8a93dc85bef5"
	I1017 20:14:01.200781  410721 cri.go:89] found id: "3eea8fc63f7454fa42560a9280bcad28b308b8a750fd423c60efbc5605f8ac6e"
	I1017 20:14:01.200786  410721 cri.go:89] found id: "d9de89a3b6ad82a5ddbbb684792758c6451c6e1c975da3a18a2b3b8a791cdc89"
	I1017 20:14:01.200790  410721 cri.go:89] found id: "49e7ffb1962fab3caba55242c34213a2dad909b04dfe3f3a834dde0b028a70b6"
	I1017 20:14:01.200798  410721 cri.go:89] found id: "4ae72c1607614926b75d0ad07975052274e878ae11cbacdc162e4c68994d3524"
	I1017 20:14:01.200801  410721 cri.go:89] found id: "97ca4527b2004f03f6c41282bd4a923be38affabd40d6736b36d1e0fe5072144"
	I1017 20:14:01.200803  410721 cri.go:89] found id: "c5ba1fcfcc5d70d455f9fdd910e6a22b090cf04195eb355cc7bed4064b708ae3"
	I1017 20:14:01.200805  410721 cri.go:89] found id: "9544f431ca492c974165fabe4c6d006e40ae3fcecf8c5b140a370ddfe7fc6447"
	I1017 20:14:01.200819  410721 cri.go:89] found id: "878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef"
	I1017 20:14:01.200825  410721 cri.go:89] found id: "591c0cf97c3dfa030e2cbd5dd65036ac54db823bcde7ded3a5dbdeedd3743984"
	I1017 20:14:01.200827  410721 cri.go:89] found id: ""
	I1017 20:14:01.200879  410721 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:14:01.213549  410721 retry.go:31] will retry after 132.809803ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:14:01Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:14:01.347029  410721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:14:01.360910  410721 pause.go:52] kubelet running: false
	I1017 20:14:01.361000  410721 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:14:01.528261  410721 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:14:01.528375  410721 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:14:01.601037  410721 cri.go:89] found id: "3c57fa6d89c0b59a810362081ee84b1bd7cda2168f28b703f844483a10a796ab"
	I1017 20:14:01.601060  410721 cri.go:89] found id: "4fc977badd37b631fffe234d4d78fa83b65352d8cb445378af3c8a93dc85bef5"
	I1017 20:14:01.601065  410721 cri.go:89] found id: "3eea8fc63f7454fa42560a9280bcad28b308b8a750fd423c60efbc5605f8ac6e"
	I1017 20:14:01.601070  410721 cri.go:89] found id: "d9de89a3b6ad82a5ddbbb684792758c6451c6e1c975da3a18a2b3b8a791cdc89"
	I1017 20:14:01.601074  410721 cri.go:89] found id: "49e7ffb1962fab3caba55242c34213a2dad909b04dfe3f3a834dde0b028a70b6"
	I1017 20:14:01.601079  410721 cri.go:89] found id: "4ae72c1607614926b75d0ad07975052274e878ae11cbacdc162e4c68994d3524"
	I1017 20:14:01.601093  410721 cri.go:89] found id: "97ca4527b2004f03f6c41282bd4a923be38affabd40d6736b36d1e0fe5072144"
	I1017 20:14:01.601097  410721 cri.go:89] found id: "c5ba1fcfcc5d70d455f9fdd910e6a22b090cf04195eb355cc7bed4064b708ae3"
	I1017 20:14:01.601101  410721 cri.go:89] found id: "9544f431ca492c974165fabe4c6d006e40ae3fcecf8c5b140a370ddfe7fc6447"
	I1017 20:14:01.601136  410721 cri.go:89] found id: "878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef"
	I1017 20:14:01.601144  410721 cri.go:89] found id: "591c0cf97c3dfa030e2cbd5dd65036ac54db823bcde7ded3a5dbdeedd3743984"
	I1017 20:14:01.601148  410721 cri.go:89] found id: ""
	I1017 20:14:01.601195  410721 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:14:01.615227  410721 retry.go:31] will retry after 526.190199ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:14:01Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:14:02.141906  410721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:14:02.156876  410721 pause.go:52] kubelet running: false
	I1017 20:14:02.156939  410721 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:14:02.322235  410721 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:14:02.322362  410721 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:14:02.398400  410721 cri.go:89] found id: "3c57fa6d89c0b59a810362081ee84b1bd7cda2168f28b703f844483a10a796ab"
	I1017 20:14:02.398425  410721 cri.go:89] found id: "4fc977badd37b631fffe234d4d78fa83b65352d8cb445378af3c8a93dc85bef5"
	I1017 20:14:02.398429  410721 cri.go:89] found id: "3eea8fc63f7454fa42560a9280bcad28b308b8a750fd423c60efbc5605f8ac6e"
	I1017 20:14:02.398432  410721 cri.go:89] found id: "d9de89a3b6ad82a5ddbbb684792758c6451c6e1c975da3a18a2b3b8a791cdc89"
	I1017 20:14:02.398435  410721 cri.go:89] found id: "49e7ffb1962fab3caba55242c34213a2dad909b04dfe3f3a834dde0b028a70b6"
	I1017 20:14:02.398438  410721 cri.go:89] found id: "4ae72c1607614926b75d0ad07975052274e878ae11cbacdc162e4c68994d3524"
	I1017 20:14:02.398441  410721 cri.go:89] found id: "97ca4527b2004f03f6c41282bd4a923be38affabd40d6736b36d1e0fe5072144"
	I1017 20:14:02.398443  410721 cri.go:89] found id: "c5ba1fcfcc5d70d455f9fdd910e6a22b090cf04195eb355cc7bed4064b708ae3"
	I1017 20:14:02.398446  410721 cri.go:89] found id: "9544f431ca492c974165fabe4c6d006e40ae3fcecf8c5b140a370ddfe7fc6447"
	I1017 20:14:02.398451  410721 cri.go:89] found id: "878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef"
	I1017 20:14:02.398455  410721 cri.go:89] found id: "591c0cf97c3dfa030e2cbd5dd65036ac54db823bcde7ded3a5dbdeedd3743984"
	I1017 20:14:02.398459  410721 cri.go:89] found id: ""
	I1017 20:14:02.398503  410721 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:14:02.413516  410721 out.go:203] 
	W1017 20:14:02.414918  410721 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:14:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:14:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:14:02.414936  410721 out.go:285] * 
	* 
	W1017 20:14:02.419458  410721 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:14:02.420545  410721 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-051488 --alsologtostderr -v=1 failed: exit status 80
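The failure reduces to `sudo runc list -f json` finding no /run/runc state directory on the node. A hedged sketch for confirming what the node actually has (the commands below are illustrative, not part of the test run; CRI-O tracks these containers through crictl):
	minikube -p embed-certs-051488 ssh -- ls /run/runc
	minikube -p embed-certs-051488 ssh -- sudo crictl ps -a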
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-051488
helpers_test.go:243: (dbg) docker inspect embed-certs-051488:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9",
	        "Created": "2025-10-17T20:11:58.181534777Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 396120,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:13:04.325340027Z",
	            "FinishedAt": "2025-10-17T20:13:03.373771388Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9/hostname",
	        "HostsPath": "/var/lib/docker/containers/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9/hosts",
	        "LogPath": "/var/lib/docker/containers/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9-json.log",
	        "Name": "/embed-certs-051488",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-051488:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-051488",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9",
	                "LowerDir": "/var/lib/docker/overlay2/684b82987b68d7135a27ad8b5cf1b32e9c1320900d7e0bc08bfd98a435c63c89-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/684b82987b68d7135a27ad8b5cf1b32e9c1320900d7e0bc08bfd98a435c63c89/merged",
	                "UpperDir": "/var/lib/docker/overlay2/684b82987b68d7135a27ad8b5cf1b32e9c1320900d7e0bc08bfd98a435c63c89/diff",
	                "WorkDir": "/var/lib/docker/overlay2/684b82987b68d7135a27ad8b5cf1b32e9c1320900d7e0bc08bfd98a435c63c89/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-051488",
	                "Source": "/var/lib/docker/volumes/embed-certs-051488/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-051488",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-051488",
	                "name.minikube.sigs.k8s.io": "embed-certs-051488",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "935247c2b6abd4d68c4ec038fc232d8734710a06bdf90754c5f0df051e9724d6",
	            "SandboxKey": "/var/run/docker/netns/935247c2b6ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33209"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33210"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33213"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33211"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33212"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-051488": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:0c:04:aa:53:4c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f65906aaca8cabced2699549a6acf35f9aee8c707d1ca3ba4422f5bcdf4982c0",
	                    "EndpointID": "48ff3722de84576045a629fa0564a896c6af6989b59ebae0df2038054a0a5c69",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-051488",
	                        "8985127eaa32"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
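Note that HostConfig.PortBindings above requests 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports at container start; the actual assignments (33209-33213) appear under NetworkSettings.Ports. A minimal sketch for recovering one mapping while the container is running (plain docker CLI, not part of the test harness):

	docker port embed-certs-051488 8443/tcp
	# equivalently, with a Go template:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-051488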
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-051488 -n embed-certs-051488
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-051488 -n embed-certs-051488: exit status 2 (345.227936ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
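minikube status encodes per-component state in its exit code, so a non-zero exit while Host reports "Running" is plausible for a cluster stuck mid-pause; hence the "(may be ok)" note. A sketch of a fuller per-component query against the same profile (Host, Kubelet, APIServer, and Kubeconfig are fields of the status template):

	out/minikube-linux-amd64 status -p embed-certs-051488 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'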
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-051488 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-051488 logs -n 25: (1.320155808s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-051488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ stop    │ -p embed-certs-051488 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:13 UTC │
	│ addons  │ enable metrics-server -p newest-cni-051083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ stop    │ -p newest-cni-051083 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-051083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:13 UTC │
	│ addons  │ enable dashboard -p embed-certs-051488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ image   │ newest-cni-051083 image list --format=json                                                                                                                                                                                                    │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ pause   │ -p newest-cni-051083 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-563805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-563805 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p newest-cni-051083                                                                                                                                                                                                                          │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p newest-cni-051083                                                                                                                                                                                                                          │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p cert-options-318223 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-660693    │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-660693    │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-563805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ ssh     │ cert-options-318223 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ ssh     │ -p cert-options-318223 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p cert-options-318223                                                                                                                                                                                                                        │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p auto-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-684669                  │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ image   │ embed-certs-051488 image list --format=json                                                                                                                                                                                                   │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	│ pause   │ -p embed-certs-051488 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:13:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:13:42.855350  407971 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:13:42.855661  407971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:13:42.855672  407971 out.go:374] Setting ErrFile to fd 2...
	I1017 20:13:42.855675  407971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:13:42.855953  407971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:13:42.856573  407971 out.go:368] Setting JSON to false
	I1017 20:13:42.858335  407971 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6971,"bootTime":1760725052,"procs":448,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:13:42.858450  407971 start.go:141] virtualization: kvm guest
	I1017 20:13:42.860517  407971 out.go:179] * [auto-684669] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:13:42.862071  407971 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:13:42.862079  407971 notify.go:220] Checking for updates...
	I1017 20:13:42.864943  407971 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:13:42.866189  407971 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:13:42.867532  407971 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:13:42.868929  407971 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:13:42.870319  407971 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:13:42.872413  407971 config.go:182] Loaded profile config "default-k8s-diff-port-563805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:42.872498  407971 config.go:182] Loaded profile config "embed-certs-051488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:42.872583  407971 config.go:182] Loaded profile config "kubernetes-upgrade-660693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:42.872687  407971 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:13:42.897535  407971 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:13:42.897646  407971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:13:42.962108  407971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-17 20:13:42.950866524 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:13:42.962210  407971 docker.go:318] overlay module found
	I1017 20:13:42.964501  407971 out.go:179] * Using the docker driver based on user configuration
	I1017 20:13:42.966213  407971 start.go:305] selected driver: docker
	I1017 20:13:42.966248  407971 start.go:925] validating driver "docker" against <nil>
	I1017 20:13:42.966265  407971 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:13:42.966885  407971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:13:43.028227  407971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-17 20:13:43.017832442 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:13:43.028399  407971 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:13:43.028632  407971 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:13:43.031262  407971 out.go:179] * Using Docker driver with root privileges
	I1017 20:13:43.032968  407971 cni.go:84] Creating CNI manager for ""
	I1017 20:13:43.033054  407971 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:13:43.033066  407971 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:13:43.033147  407971 start.go:349] cluster config:
	{Name:auto-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:13:43.034864  407971 out.go:179] * Starting "auto-684669" primary control-plane node in "auto-684669" cluster
	I1017 20:13:43.036235  407971 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:13:43.037608  407971 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:13:43.039203  407971 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:13:43.039261  407971 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 20:13:43.039274  407971 cache.go:58] Caching tarball of preloaded images
	I1017 20:13:43.039326  407971 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:13:43.039416  407971 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:13:43.039434  407971 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:13:43.039575  407971 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/config.json ...
	I1017 20:13:43.039603  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/config.json: {Name:mk61c4e3aaa1fc1676890341ad47c24d8e093beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:43.062022  407971 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:13:43.062050  407971 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:13:43.062068  407971 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:13:43.062100  407971 start.go:360] acquireMachinesLock for auto-684669: {Name:mk616488c2ac15954365af4978649d5629bee3e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:13:43.062233  407971 start.go:364] duration metric: took 106.751µs to acquireMachinesLock for "auto-684669"
	I1017 20:13:43.062266  407971 start.go:93] Provisioning new machine with config: &{Name:auto-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:13:43.062361  407971 start.go:125] createHost starting for "" (driver="docker")
	W1017 20:13:40.161711  395845 pod_ready.go:104] pod "coredns-66bc5c9577-gq5dd" is not "Ready", error: <nil>
	W1017 20:13:42.162247  395845 pod_ready.go:104] pod "coredns-66bc5c9577-gq5dd" is not "Ready", error: <nil>
	I1017 20:13:40.010173  405011 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 20:13:40.014983  405011 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:13:40.015019  405011 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:13:40.509635  405011 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 20:13:40.514330  405011 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1017 20:13:40.515578  405011 api_server.go:141] control plane version: v1.34.1
	I1017 20:13:40.515621  405011 api_server.go:131] duration metric: took 1.006266723s to wait for apiserver health ...
	I1017 20:13:40.515633  405011 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:13:40.519496  405011 system_pods.go:59] 8 kube-system pods found
	I1017 20:13:40.519536  405011 system_pods.go:61] "coredns-66bc5c9577-bsp94" [bf23fe6e-8ed0-4e40-92cd-65e6940b198d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:13:40.519547  405011 system_pods.go:61] "etcd-default-k8s-diff-port-563805" [ef713db5-e896-4ffa-a845-581fce8aba91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:13:40.519555  405011 system_pods.go:61] "kindnet-gzsxs" [eeb2f556-2ec6-4874-a910-c441e7cc0770] Running
	I1017 20:13:40.519563  405011 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-563805" [b6332401-9281-4f9a-bb12-02860b0b2276] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:13:40.519573  405011 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-563805" [466703e0-7428-428a-a770-fdcd8b10d8f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:13:40.519581  405011 system_pods.go:61] "kube-proxy-g7749" [812ff08f-93ab-4a35-bf0c-2aabb5d4b9b8] Running
	I1017 20:13:40.519590  405011 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-563805" [6a46e6a6-1cc3-420e-9183-f171d6ee3dbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:13:40.519594  405011 system_pods.go:61] "storage-provisioner" [654c455e-6dcf-46d1-8664-0c1579d0a498] Running
	I1017 20:13:40.519602  405011 system_pods.go:74] duration metric: took 3.96234ms to wait for pod list to return data ...
	I1017 20:13:40.519614  405011 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:13:40.523187  405011 default_sa.go:45] found service account: "default"
	I1017 20:13:40.523222  405011 default_sa.go:55] duration metric: took 3.601221ms for default service account to be created ...
	I1017 20:13:40.523237  405011 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:13:40.526838  405011 system_pods.go:86] 8 kube-system pods found
	I1017 20:13:40.526877  405011 system_pods.go:89] "coredns-66bc5c9577-bsp94" [bf23fe6e-8ed0-4e40-92cd-65e6940b198d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:13:40.526889  405011 system_pods.go:89] "etcd-default-k8s-diff-port-563805" [ef713db5-e896-4ffa-a845-581fce8aba91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:13:40.526897  405011 system_pods.go:89] "kindnet-gzsxs" [eeb2f556-2ec6-4874-a910-c441e7cc0770] Running
	I1017 20:13:40.526907  405011 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-563805" [b6332401-9281-4f9a-bb12-02860b0b2276] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:13:40.526921  405011 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-563805" [466703e0-7428-428a-a770-fdcd8b10d8f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:13:40.526930  405011 system_pods.go:89] "kube-proxy-g7749" [812ff08f-93ab-4a35-bf0c-2aabb5d4b9b8] Running
	I1017 20:13:40.526939  405011 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-563805" [6a46e6a6-1cc3-420e-9183-f171d6ee3dbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:13:40.526954  405011 system_pods.go:89] "storage-provisioner" [654c455e-6dcf-46d1-8664-0c1579d0a498] Running
	I1017 20:13:40.526966  405011 system_pods.go:126] duration metric: took 3.72087ms to wait for k8s-apps to be running ...
	I1017 20:13:40.526979  405011 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:13:40.527017  405011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:13:40.541205  405011 system_svc.go:56] duration metric: took 14.214885ms WaitForService to wait for kubelet
	I1017 20:13:40.541241  405011 kubeadm.go:586] duration metric: took 3.368075726s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:13:40.541264  405011 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:13:40.544535  405011 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:13:40.544566  405011 node_conditions.go:123] node cpu capacity is 8
	I1017 20:13:40.544579  405011 node_conditions.go:105] duration metric: took 3.310003ms to run NodePressure ...
	I1017 20:13:40.544591  405011 start.go:241] waiting for startup goroutines ...
	I1017 20:13:40.544598  405011 start.go:246] waiting for cluster config update ...
	I1017 20:13:40.544608  405011 start.go:255] writing updated cluster config ...
	I1017 20:13:40.544883  405011 ssh_runner.go:195] Run: rm -f paused
	I1017 20:13:40.549171  405011 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:13:40.554060  405011 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bsp94" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:13:42.560349  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:13:44.560593  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:13:44.660870  395845 pod_ready.go:104] pod "coredns-66bc5c9577-gq5dd" is not "Ready", error: <nil>
	I1017 20:13:46.161283  395845 pod_ready.go:94] pod "coredns-66bc5c9577-gq5dd" is "Ready"
	I1017 20:13:46.161315  395845 pod_ready.go:86] duration metric: took 31.006834558s for pod "coredns-66bc5c9577-gq5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.165994  395845 pod_ready.go:83] waiting for pod "etcd-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.171659  395845 pod_ready.go:94] pod "etcd-embed-certs-051488" is "Ready"
	I1017 20:13:46.171690  395845 pod_ready.go:86] duration metric: took 5.670006ms for pod "etcd-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.174345  395845 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.179314  395845 pod_ready.go:94] pod "kube-apiserver-embed-certs-051488" is "Ready"
	I1017 20:13:46.179340  395845 pod_ready.go:86] duration metric: took 4.970841ms for pod "kube-apiserver-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.182039  395845 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.357842  395845 pod_ready.go:94] pod "kube-controller-manager-embed-certs-051488" is "Ready"
	I1017 20:13:46.357871  395845 pod_ready.go:86] duration metric: took 175.800345ms for pod "kube-controller-manager-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.559044  395845 pod_ready.go:83] waiting for pod "kube-proxy-95wmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.957860  395845 pod_ready.go:94] pod "kube-proxy-95wmw" is "Ready"
	I1017 20:13:46.957891  395845 pod_ready.go:86] duration metric: took 398.812686ms for pod "kube-proxy-95wmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:47.158932  395845 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:47.559152  395845 pod_ready.go:94] pod "kube-scheduler-embed-certs-051488" is "Ready"
	I1017 20:13:47.559185  395845 pod_ready.go:86] duration metric: took 400.218683ms for pod "kube-scheduler-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:47.559201  395845 pod_ready.go:40] duration metric: took 32.412002681s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:13:47.621222  395845 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:13:47.710484  395845 out.go:179] * Done! kubectl is now configured to use "embed-certs-051488" cluster and "default" namespace by default
	I1017 20:13:43.064686  407971 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:13:43.064940  407971 start.go:159] libmachine.API.Create for "auto-684669" (driver="docker")
	I1017 20:13:43.064976  407971 client.go:168] LocalClient.Create starting
	I1017 20:13:43.065079  407971 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem
	I1017 20:13:43.065128  407971 main.go:141] libmachine: Decoding PEM data...
	I1017 20:13:43.065152  407971 main.go:141] libmachine: Parsing certificate...
	I1017 20:13:43.065252  407971 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem
	I1017 20:13:43.065288  407971 main.go:141] libmachine: Decoding PEM data...
	I1017 20:13:43.065304  407971 main.go:141] libmachine: Parsing certificate...
	I1017 20:13:43.065694  407971 cli_runner.go:164] Run: docker network inspect auto-684669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:13:43.083132  407971 cli_runner.go:211] docker network inspect auto-684669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:13:43.083229  407971 network_create.go:284] running [docker network inspect auto-684669] to gather additional debugging logs...
	I1017 20:13:43.083279  407971 cli_runner.go:164] Run: docker network inspect auto-684669
	W1017 20:13:43.102227  407971 cli_runner.go:211] docker network inspect auto-684669 returned with exit code 1
	I1017 20:13:43.102267  407971 network_create.go:287] error running [docker network inspect auto-684669]: docker network inspect auto-684669: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-684669 not found
	I1017 20:13:43.102283  407971 network_create.go:289] output of [docker network inspect auto-684669]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-684669 not found
	
	** /stderr **
	I1017 20:13:43.102459  407971 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:13:43.124813  407971 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d34a70da1174 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:b8:c9:c3:2e:b0} reservation:<nil>}
	I1017 20:13:43.125612  407971 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-07edace58173 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f3:28:2c:52:ce} reservation:<nil>}
	I1017 20:13:43.126376  407971 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a478249e8fe7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:51:65:8d:cb:60} reservation:<nil>}
	I1017 20:13:43.127220  407971 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7ed8ef1bc0a4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:6a:98:d7:e8:28} reservation:<nil>}
	I1017 20:13:43.127648  407971 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9a4aaba57340 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:16:30:99:20:8d:be} reservation:<nil>}
	I1017 20:13:43.128556  407971 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f65906aaca8c IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ba:86:9c:15:01:28} reservation:<nil>}
	I1017 20:13:43.129455  407971 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020309b0}
	I1017 20:13:43.129480  407971 network_create.go:124] attempt to create docker network auto-684669 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1017 20:13:43.129534  407971 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-684669 auto-684669
	I1017 20:13:43.192265  407971 network_create.go:108] docker network auto-684669 192.168.103.0/24 created
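	The lines above show minikube probing the private 192.168.x.0/24 range in steps of nine (…49, …58, …67, …) and taking the first subnet no existing docker bridge claims, then creating the network with a fixed gateway and MTU. A minimal bash sketch of that scan, assuming grep-based matching rather than minikube's actual netutil code:
	
	    # collect the subnets already claimed by existing docker networks
	    taken=$(docker network ls -q | xargs -r docker network inspect \
	              --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
	    # walk the same candidate sequence the log shows: 49, 58, 67, ...
	    for third in 49 58 67 76 85 94 103; do
	      cidr="192.168.${third}.0/24"
	      if ! grep -qF "$cidr" <<<"$taken"; then
	        echo "free subnet: $cidr"
	        break
	      fi
	    done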
	I1017 20:13:43.192296  407971 kic.go:121] calculated static IP "192.168.103.2" for the "auto-684669" container
	I1017 20:13:43.192360  407971 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:13:43.210808  407971 cli_runner.go:164] Run: docker volume create auto-684669 --label name.minikube.sigs.k8s.io=auto-684669 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:13:43.229348  407971 oci.go:103] Successfully created a docker volume auto-684669
	I1017 20:13:43.229422  407971 cli_runner.go:164] Run: docker run --rm --name auto-684669-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-684669 --entrypoint /usr/bin/test -v auto-684669:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:13:43.626952  407971 oci.go:107] Successfully prepared a docker volume auto-684669
	I1017 20:13:43.627008  407971 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:13:43.627043  407971 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:13:43.627116  407971 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-684669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1017 20:13:46.566084  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:13:48.590665  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	I1017 20:13:48.708237  407971 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-684669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.08107411s)
	I1017 20:13:48.708286  407971 kic.go:203] duration metric: took 5.081238416s to extract preloaded images to volume ...
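	The ~5s step just completed is the preload fast path: the lz4-compressed image tarball from the host cache is mounted read-only into a throwaway kicbase container alongside the named volume, and tar unpacks straight into the volume. Reduced to its shape (sketch; $PRELOAD_TARBALL and <kicbase-image> stand in for the full path and pinned digest shown above):
	
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
	      -v auto-684669:/extractDir \
	      <kicbase-image> \
	      -I lz4 -xf /preloaded.tar -C /extractDir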
	W1017 20:13:48.708423  407971 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 20:13:48.708477  407971 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 20:13:48.708533  407971 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:13:48.783220  407971 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-684669 --name auto-684669 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-684669 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-684669 --network auto-684669 --ip 192.168.103.2 --volume auto-684669:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:13:49.745164  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Running}}
	I1017 20:13:49.772680  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Status}}
	I1017 20:13:49.795843  407971 cli_runner.go:164] Run: docker exec auto-684669 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:13:49.852141  407971 oci.go:144] the created container "auto-684669" has a running status.
	I1017 20:13:49.852181  407971 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa...
	I1017 20:13:49.939242  407971 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:13:49.974400  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Status}}
	I1017 20:13:49.999957  407971 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:13:49.999985  407971 kic_runner.go:114] Args: [docker exec --privileged auto-684669 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:13:50.055378  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Status}}
	I1017 20:13:50.078249  407971 machine.go:93] provisionDockerMachine start ...
	I1017 20:13:50.078370  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:50.105796  407971 main.go:141] libmachine: Using SSH client type: native
	I1017 20:13:50.106438  407971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1017 20:13:50.106468  407971 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:13:50.107544  407971 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50138->127.0.0.1:33224: read: connection reset by peer
	W1017 20:13:51.059604  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:13:53.060171  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	I1017 20:13:53.244265  407971 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-684669
	
	I1017 20:13:53.244301  407971 ubuntu.go:182] provisioning hostname "auto-684669"
	I1017 20:13:53.244379  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:53.264486  407971 main.go:141] libmachine: Using SSH client type: native
	I1017 20:13:53.264726  407971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1017 20:13:53.264757  407971 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-684669 && echo "auto-684669" | sudo tee /etc/hostname
	I1017 20:13:53.412584  407971 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-684669
	
	I1017 20:13:53.412676  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:53.430680  407971 main.go:141] libmachine: Using SSH client type: native
	I1017 20:13:53.430959  407971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1017 20:13:53.430980  407971 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-684669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-684669/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-684669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:13:53.571905  407971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:13:53.571940  407971 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:13:53.571969  407971 ubuntu.go:190] setting up certificates
	I1017 20:13:53.571981  407971 provision.go:84] configureAuth start
	I1017 20:13:53.572050  407971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-684669
	I1017 20:13:53.590768  407971 provision.go:143] copyHostCerts
	I1017 20:13:53.590829  407971 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:13:53.590837  407971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:13:53.590907  407971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:13:53.591006  407971 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:13:53.591016  407971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:13:53.591042  407971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:13:53.591111  407971 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:13:53.591119  407971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:13:53.591142  407971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:13:53.591200  407971 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.auto-684669 san=[127.0.0.1 192.168.103.2 auto-684669 localhost minikube]
	I1017 20:13:53.772877  407971 provision.go:177] copyRemoteCerts
	I1017 20:13:53.772939  407971 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:13:53.772976  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:53.791242  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:13:53.889939  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:13:53.911484  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1017 20:13:53.929896  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
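	The server cert generated at 20:13:53 above was requested with SANs [127.0.0.1 192.168.103.2 auto-684669 localhost minikube]; after the scp that can be confirmed against the on-disk copy (sketch, assuming OpenSSL >= 1.1.1 for the -ext flag; $MINIKUBE_HOME abbreviates the .minikube path in the log):
	
	    # print the Subject Alternative Names baked into the machine server cert
	    openssl x509 -noout -ext subjectAltName \
	      -in "$MINIKUBE_HOME/machines/server.pem"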
	I1017 20:13:53.948800  407971 provision.go:87] duration metric: took 376.798335ms to configureAuth
	I1017 20:13:53.948831  407971 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:13:53.949069  407971 config.go:182] Loaded profile config "auto-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:53.949198  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:53.968090  407971 main.go:141] libmachine: Using SSH client type: native
	I1017 20:13:53.968327  407971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1017 20:13:53.968344  407971 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:13:54.221455  407971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:13:54.221485  407971 machine.go:96] duration metric: took 4.143200501s to provisionDockerMachine
	I1017 20:13:54.221499  407971 client.go:171] duration metric: took 11.156512575s to LocalClient.Create
	I1017 20:13:54.221530  407971 start.go:167] duration metric: took 11.156586415s to libmachine.API.Create "auto-684669"
	I1017 20:13:54.221544  407971 start.go:293] postStartSetup for "auto-684669" (driver="docker")
	I1017 20:13:54.221562  407971 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:13:54.221641  407971 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:13:54.221695  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:54.242205  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:13:54.343840  407971 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:13:54.348105  407971 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:13:54.348141  407971 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:13:54.348156  407971 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:13:54.348223  407971 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:13:54.348314  407971 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:13:54.348415  407971 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:13:54.357092  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:13:54.379922  407971 start.go:296] duration metric: took 158.355317ms for postStartSetup
	I1017 20:13:54.380311  407971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-684669
	I1017 20:13:54.400293  407971 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/config.json ...
	I1017 20:13:54.400613  407971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:13:54.400659  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:54.418771  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:13:54.513275  407971 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:13:54.518814  407971 start.go:128] duration metric: took 11.456433869s to createHost
	I1017 20:13:54.518844  407971 start.go:83] releasing machines lock for "auto-684669", held for 11.456594142s
	I1017 20:13:54.518925  407971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-684669
	I1017 20:13:54.537599  407971 ssh_runner.go:195] Run: cat /version.json
	I1017 20:13:54.537665  407971 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:13:54.537670  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:54.537849  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:54.556755  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:13:54.557608  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:13:54.653419  407971 ssh_runner.go:195] Run: systemctl --version
	I1017 20:13:54.707164  407971 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:13:54.744527  407971 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:13:54.749860  407971 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:13:54.749945  407971 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:13:54.778387  407971 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
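	The find invocation at 20:13:54.749945 is logged with its shell quoting stripped; with the quoting restored it reads (sketch, reconstruction inferred from the command as logged):
	
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;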
	I1017 20:13:54.778412  407971 start.go:495] detecting cgroup driver to use...
	I1017 20:13:54.778443  407971 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:13:54.778483  407971 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:13:54.795070  407971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:13:54.808533  407971 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:13:54.808596  407971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:13:54.827118  407971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:13:54.845268  407971 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:13:54.929685  407971 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:13:55.018691  407971 docker.go:234] disabling docker service ...
	I1017 20:13:55.018792  407971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:13:55.038588  407971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:13:55.051807  407971 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:13:55.139472  407971 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:13:55.222995  407971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:13:55.236704  407971 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:13:55.251861  407971 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:13:55.251933  407971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.264012  407971 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:13:55.264074  407971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.274221  407971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.283756  407971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.293875  407971 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:13:55.303114  407971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.313025  407971 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.328506  407971 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.338432  407971 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:13:55.347159  407971 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:13:55.355776  407971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:13:55.442100  407971 ssh_runner.go:195] Run: sudo systemctl restart crio
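	The run of sed edits above pins the pause image, switches cgroup management to systemd, moves conmon into the pod cgroup, and opens unprivileged ports via default_sysctls before CRI-O is restarted. The net effect can be read back from the drop-in in one grep (sketch):
	
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # expected, per the edits above:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "systemd"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",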
	I1017 20:13:55.672622  407971 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:13:55.672697  407971 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:13:55.676889  407971 start.go:563] Will wait 60s for crictl version
	I1017 20:13:55.676963  407971 ssh_runner.go:195] Run: which crictl
	I1017 20:13:55.680941  407971 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:13:55.706704  407971 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:13:55.706816  407971 ssh_runner.go:195] Run: crio --version
	I1017 20:13:55.736218  407971 ssh_runner.go:195] Run: crio --version
	I1017 20:13:55.769308  407971 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:13:55.770860  407971 cli_runner.go:164] Run: docker network inspect auto-684669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:13:55.788651  407971 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1017 20:13:55.793207  407971 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
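	The one-liner above is minikube's idempotent /etc/hosts update: drop any stale tab-separated mapping for the name, append the fresh one, and copy the temp file back with sudo. Generalized as a bash helper (the function name is hypothetical):
	
	    update_hosts_entry() {
	      local ip="$1" name="$2"
	      # keep every line except an old mapping for $name, then append the new one
	      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	      sudo cp "/tmp/h.$$" /etc/hosts
	    }
	    update_hosts_entry 192.168.103.1 host.minikube.internal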
	I1017 20:13:55.805607  407971 kubeadm.go:883] updating cluster {Name:auto-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:13:55.805734  407971 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:13:55.805808  407971 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:13:55.841844  407971 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:13:55.841867  407971 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:13:55.841914  407971 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:13:55.867795  407971 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:13:55.867822  407971 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:13:55.867831  407971 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1017 20:13:55.867911  407971 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-684669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:13:55.867969  407971 ssh_runner.go:195] Run: crio config
	I1017 20:13:55.915614  407971 cni.go:84] Creating CNI manager for ""
	I1017 20:13:55.915637  407971 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:13:55.915655  407971 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:13:55.915675  407971 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-684669 NodeName:auto-684669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:13:55.915817  407971 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-684669"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
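	Once this config is written out to /var/tmp/minikube/kubeadm.yaml.new (the scp a few lines below), it can be sanity-checked offline with kubeadm's own validator (sketch; `kubeadm config validate` has shipped since kubeadm v1.26, so the v1.34.1 binary staged here should have it):
	
	    /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new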
	
	I1017 20:13:55.915879  407971 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:13:55.924590  407971 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:13:55.924653  407971 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:13:55.932920  407971 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1017 20:13:55.946188  407971 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:13:55.962392  407971 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1017 20:13:55.975869  407971 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:13:55.979715  407971 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:13:55.990145  407971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:13:56.076893  407971 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:13:56.103429  407971 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669 for IP: 192.168.103.2
	I1017 20:13:56.103452  407971 certs.go:195] generating shared ca certs ...
	I1017 20:13:56.103472  407971 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.103634  407971 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:13:56.103702  407971 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:13:56.103717  407971 certs.go:257] generating profile certs ...
	I1017 20:13:56.103808  407971 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/client.key
	I1017 20:13:56.103834  407971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/client.crt with IP's: []
	I1017 20:13:56.203410  407971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/client.crt ...
	I1017 20:13:56.203444  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/client.crt: {Name:mkffb4d795f67dea6565d0e32106dff0d1d55f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.203618  407971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/client.key ...
	I1017 20:13:56.203636  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/client.key: {Name:mkc3f9ff4b434c1609e2281e01f0f4482110b189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.203718  407971 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.key.c9ade39f
	I1017 20:13:56.203734  407971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.crt.c9ade39f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1017 20:13:56.349546  407971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.crt.c9ade39f ...
	I1017 20:13:56.349576  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.crt.c9ade39f: {Name:mk4501794e1a5131f1d4d33f0f907daab8c8b53d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.349760  407971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.key.c9ade39f ...
	I1017 20:13:56.349775  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.key.c9ade39f: {Name:mk40b8d9f3bc19deea784f507e5415d43f96c4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.349870  407971 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.crt.c9ade39f -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.crt
	I1017 20:13:56.349954  407971 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.key.c9ade39f -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.key
	I1017 20:13:56.350025  407971 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.key
	I1017 20:13:56.350041  407971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.crt with IP's: []
	I1017 20:13:56.512251  407971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.crt ...
	I1017 20:13:56.512288  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.crt: {Name:mka503b80a304641d4a4b7be36cf3ebf270e9365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.512494  407971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.key ...
	I1017 20:13:56.512507  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.key: {Name:mkc6769b858a1cdba7a97b901dc1168e5da207b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.512698  407971 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:13:56.512733  407971 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:13:56.512758  407971 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:13:56.512791  407971 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:13:56.512815  407971 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:13:56.512840  407971 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:13:56.512877  407971 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:13:56.513446  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:13:56.533709  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:13:56.552695  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:13:56.573436  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:13:56.592575  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1017 20:13:56.611866  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:13:56.630664  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:13:56.650355  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:13:56.670037  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:13:56.691476  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:13:56.710840  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:13:56.730239  407971 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:13:56.743568  407971 ssh_runner.go:195] Run: openssl version
	I1017 20:13:56.750166  407971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:13:56.760001  407971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:13:56.764353  407971 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:13:56.764408  407971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:13:56.801085  407971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:13:56.810470  407971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:13:56.819611  407971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:13:56.823843  407971 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:13:56.823909  407971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:13:56.860547  407971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:13:56.869893  407971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:13:56.879122  407971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:13:56.883528  407971 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:13:56.883597  407971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:13:56.923513  407971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
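	The symlink names in the three blocks above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: `openssl x509 -hash` prints the value that TLS stacks look up as <hash>.0 under /etc/ssl/certs, which is what c_rehash automates. One such link, done by hand (sketch):
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"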
	I1017 20:13:56.932768  407971 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:13:56.936825  407971 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:13:56.936877  407971 kubeadm.go:400] StartCluster: {Name:auto-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:13:56.936944  407971 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:13:56.937002  407971 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:13:56.966072  407971 cri.go:89] found id: ""
	I1017 20:13:56.966149  407971 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:13:56.974956  407971 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:13:56.983458  407971 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:13:56.983529  407971 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:13:56.992091  407971 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:13:56.992112  407971 kubeadm.go:157] found existing configuration files:
	
	I1017 20:13:56.992160  407971 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:13:57.000449  407971 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:13:57.000505  407971 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:13:57.008332  407971 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:13:57.016371  407971 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:13:57.016443  407971 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:13:57.024176  407971 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:13:57.032596  407971 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:13:57.032652  407971 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:13:57.040546  407971 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:13:57.049336  407971 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:13:57.049391  407971 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:13:57.057949  407971 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:13:57.119193  407971 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 20:13:57.179457  407971 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1017 20:13:55.560405  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:13:58.059910  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 17 20:13:27 embed-certs-051488 crio[561]: time="2025-10-17T20:13:27.793934944Z" level=info msg="Created container 591c0cf97c3dfa030e2cbd5dd65036ac54db823bcde7ded3a5dbdeedd3743984: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xkxdm/kubernetes-dashboard" id=fa937abc-fd7c-4d1c-9202-651140ed49d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:27 embed-certs-051488 crio[561]: time="2025-10-17T20:13:27.794664554Z" level=info msg="Starting container: 591c0cf97c3dfa030e2cbd5dd65036ac54db823bcde7ded3a5dbdeedd3743984" id=534bab4c-5da1-4994-b249-0ba3de510c8d name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:13:27 embed-certs-051488 crio[561]: time="2025-10-17T20:13:27.797206335Z" level=info msg="Started container" PID=1718 containerID=591c0cf97c3dfa030e2cbd5dd65036ac54db823bcde7ded3a5dbdeedd3743984 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xkxdm/kubernetes-dashboard id=534bab4c-5da1-4994-b249-0ba3de510c8d name=/runtime.v1.RuntimeService/StartContainer sandboxID=1850b84f3f5bcb6a307cc4b1b246f4372d2be697e1d14528a26c10eeffc35eaa
	Oct 17 20:13:40 embed-certs-051488 crio[561]: time="2025-10-17T20:13:40.986922105Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b383fe71-f6dc-41de-8c6d-de3fd04c4319 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:40 embed-certs-051488 crio[561]: time="2025-10-17T20:13:40.990529327Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bb918fe6-e312-4641-8d6a-ae259bd54dd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:40 embed-certs-051488 crio[561]: time="2025-10-17T20:13:40.992363522Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz/dashboard-metrics-scraper" id=ef89cf3b-b89d-45ec-b9ed-bad85d525240 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:40 embed-certs-051488 crio[561]: time="2025-10-17T20:13:40.992691011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.001556393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.002165267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.039274391Z" level=info msg="Created container 878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz/dashboard-metrics-scraper" id=ef89cf3b-b89d-45ec-b9ed-bad85d525240 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.040010525Z" level=info msg="Starting container: 878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef" id=e3548ff1-b21f-4c01-baf3-50f21e15da8f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.04217336Z" level=info msg="Started container" PID=1737 containerID=878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz/dashboard-metrics-scraper id=e3548ff1-b21f-4c01-baf3-50f21e15da8f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3ab8800de10934797640e100926166d4115150e4affc51c8924844d96b7afdd7
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.111656792Z" level=info msg="Removing container: caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c" id=16e93b95-2c03-48b6-896a-f23ff9e72126 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.12259103Z" level=info msg="Removed container caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz/dashboard-metrics-scraper" id=16e93b95-2c03-48b6-896a-f23ff9e72126 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.125070269Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=925324f7-1d33-48a6-972f-34578ab13432 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.126270639Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3e1e45ce-6ef7-4217-b706-a84b8df01c91 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.127415381Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=747dacb1-1b9f-43a4-a2d7-aa67f5c54f15 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.12775249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.133450103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.133696041Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/10f35b31be82611e8d7e4a6e9da9f27fcca80c42598495ce40e31314bfe3b7e7/merged/etc/passwd: no such file or directory"
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.133755309Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/10f35b31be82611e8d7e4a6e9da9f27fcca80c42598495ce40e31314bfe3b7e7/merged/etc/group: no such file or directory"
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.134078278Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.169029064Z" level=info msg="Created container 3c57fa6d89c0b59a810362081ee84b1bd7cda2168f28b703f844483a10a796ab: kube-system/storage-provisioner/storage-provisioner" id=747dacb1-1b9f-43a4-a2d7-aa67f5c54f15 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.17011674Z" level=info msg="Starting container: 3c57fa6d89c0b59a810362081ee84b1bd7cda2168f28b703f844483a10a796ab" id=00f14a0d-edde-4ef1-9d2b-3a96ae396026 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.172681573Z" level=info msg="Started container" PID=1751 containerID=3c57fa6d89c0b59a810362081ee84b1bd7cda2168f28b703f844483a10a796ab description=kube-system/storage-provisioner/storage-provisioner id=00f14a0d-edde-4ef1-9d2b-3a96ae396026 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c77b9a75149eda4c0a082043ffc497dc7101e25ad08d910c2a139f81c324b1c0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	3c57fa6d89c0b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   c77b9a75149ed       storage-provisioner                          kube-system
	878fe33e8cde0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   3ab8800de1093       dashboard-metrics-scraper-6ffb444bf9-qpfxz   kubernetes-dashboard
	591c0cf97c3df       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   1850b84f3f5bc       kubernetes-dashboard-855c9754f9-xkxdm        kubernetes-dashboard
	3de88292c408f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   165d2b5f3ac23       busybox                                      default
	4fc977badd37b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   cc431ac6c9d2f       coredns-66bc5c9577-gq5dd                     kube-system
	3eea8fc63f745       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   c77b9a75149ed       storage-provisioner                          kube-system
	d9de89a3b6ad8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   6be124a550353       kindnet-rzd8h                                kube-system
	49e7ffb1962fa       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   8e778eef29342       kube-proxy-95wmw                             kube-system
	4ae72c1607614       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           52 seconds ago      Running             kube-controller-manager     0                   b22f2187033cd       kube-controller-manager-embed-certs-051488   kube-system
	97ca4527b2004       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           52 seconds ago      Running             kube-scheduler              0                   3177a97e031d3       kube-scheduler-embed-certs-051488            kube-system
	c5ba1fcfcc5d7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           52 seconds ago      Running             etcd                        0                   f44607053d2df       etcd-embed-certs-051488                      kube-system
	9544f431ca492       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           52 seconds ago      Running             kube-apiserver              0                   7eed7b8478c96       kube-apiserver-embed-certs-051488            kube-system
	
	
	==> coredns [4fc977badd37b631fffe234d4d78fa83b65352d8cb445378af3c8a93dc85bef5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55481 - 38988 "HINFO IN 5188716728688465565.2489256334780365536. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074540885s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
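	The repeated CoreDNS list failures above are plain TCP dial timeouts to the kubernetes Service ClusterIP while kube-proxy is still reprogramming service rules after the node restart. A minimal Go sketch of the same reachability probe (the 10.96.0.1:443 address is copied from the log; this snippet is illustrative and not part of the test suite):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Address copied from the CoreDNS errors above; the 3s timeout is an arbitrary choice.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("apiserver Service unreachable:", err) // corresponds to the "i/o timeout" seen in the log
			return
		}
		conn.Close()
		fmt.Println("apiserver Service reachable")
	}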
	
	
	==> describe nodes <==
	Name:               embed-certs-051488
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-051488
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=embed-certs-051488
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_12_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:12:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-051488
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:13:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:13:54 +0000   Fri, 17 Oct 2025 20:12:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:13:54 +0000   Fri, 17 Oct 2025 20:12:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:13:54 +0000   Fri, 17 Oct 2025 20:12:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:13:54 +0000   Fri, 17 Oct 2025 20:12:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-051488
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                9303a0d5-fdd2-44db-b000-32ff1975a9e6
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-gq5dd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-embed-certs-051488                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-rzd8h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-embed-certs-051488             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-051488    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-95wmw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-embed-certs-051488             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qpfxz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xkxdm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node embed-certs-051488 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node embed-certs-051488 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)  kubelet          Node embed-certs-051488 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node embed-certs-051488 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node embed-certs-051488 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node embed-certs-051488 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                 node-controller  Node embed-certs-051488 event: Registered Node embed-certs-051488 in Controller
	  Normal  NodeReady                92s                  kubelet          Node embed-certs-051488 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 53s)    kubelet          Node embed-certs-051488 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 53s)    kubelet          Node embed-certs-051488 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 53s)    kubelet          Node embed-certs-051488 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                  node-controller  Node embed-certs-051488 event: Registered Node embed-certs-051488 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [c5ba1fcfcc5d70d455f9fdd910e6a22b090cf04195eb355cc7bed4064b708ae3] <==
	{"level":"warn","ts":"2025-10-17T20:13:12.994532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.005166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.013886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.031004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.039684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.047325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.056662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.066325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.074926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.084132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.092587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.100652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.107479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.121265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.128874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.139068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.204444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35932","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T20:13:19.089918Z","caller":"traceutil/trace.go:172","msg":"trace[527764849] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"197.482001ms","start":"2025-10-17T20:13:18.892409Z","end":"2025-10-17T20:13:19.089891Z","steps":["trace[527764849] 'process raft request'  (duration: 146.625026ms)","trace[527764849] 'compare'  (duration: 50.692477ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:19.225425Z","caller":"traceutil/trace.go:172","msg":"trace[537623989] transaction","detail":"{read_only:false; response_revision:550; number_of_response:1; }","duration":"132.050012ms","start":"2025-10-17T20:13:19.093354Z","end":"2025-10-17T20:13:19.225404Z","steps":["trace[537623989] 'process raft request'  (duration: 128.522543ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:13:19.357784Z","caller":"traceutil/trace.go:172","msg":"trace[2002606308] transaction","detail":"{read_only:false; response_revision:551; number_of_response:1; }","duration":"129.307561ms","start":"2025-10-17T20:13:19.228461Z","end":"2025-10-17T20:13:19.357768Z","steps":["trace[2002606308] 'process raft request'  (duration: 125.559085ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:13:19.483563Z","caller":"traceutil/trace.go:172","msg":"trace[1209562223] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"117.009766ms","start":"2025-10-17T20:13:19.366528Z","end":"2025-10-17T20:13:19.483537Z","steps":["trace[1209562223] 'process raft request'  (duration: 96.490431ms)","trace[1209562223] 'compare'  (duration: 20.398271ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:19.686031Z","caller":"traceutil/trace.go:172","msg":"trace[2033147254] transaction","detail":"{read_only:false; response_revision:555; number_of_response:1; }","duration":"141.656213ms","start":"2025-10-17T20:13:19.544346Z","end":"2025-10-17T20:13:19.686002Z","steps":["trace[2033147254] 'process raft request'  (duration: 122.065407ms)","trace[2033147254] 'compare'  (duration: 19.49393ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:19.852499Z","caller":"traceutil/trace.go:172","msg":"trace[1282501817] transaction","detail":"{read_only:false; response_revision:557; number_of_response:1; }","duration":"142.810454ms","start":"2025-10-17T20:13:19.709669Z","end":"2025-10-17T20:13:19.852479Z","steps":["trace[1282501817] 'process raft request'  (duration: 123.446355ms)","trace[1282501817] 'compare'  (duration: 19.262782ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:20.169369Z","caller":"traceutil/trace.go:172","msg":"trace[1840583260] transaction","detail":"{read_only:false; response_revision:562; number_of_response:1; }","duration":"234.248745ms","start":"2025-10-17T20:13:19.935075Z","end":"2025-10-17T20:13:20.169323Z","steps":["trace[1840583260] 'process raft request'  (duration: 141.616611ms)","trace[1840583260] 'compare'  (duration: 92.436187ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:46.726821Z","caller":"traceutil/trace.go:172","msg":"trace[258077766] transaction","detail":"{read_only:false; response_revision:673; number_of_response:1; }","duration":"110.869235ms","start":"2025-10-17T20:13:46.615914Z","end":"2025-10-17T20:13:46.726783Z","steps":["trace[258077766] 'process raft request'  (duration: 51.346041ms)","trace[258077766] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz; req_size:4791; } (duration: 59.310887ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:14:03 up  1:56,  0 user,  load average: 5.36, 4.83, 3.03
	Linux embed-certs-051488 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d9de89a3b6ad82a5ddbbb684792758c6451c6e1c975da3a18a2b3b8a791cdc89] <==
	I1017 20:13:14.689516       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:13:14.689827       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1017 20:13:14.690006       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:13:14.690032       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:13:14.690059       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:13:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:13:14.897732       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:13:14.897853       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:13:14.897868       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:13:15.050224       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:13:15.398914       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:13:15.398956       1 metrics.go:72] Registering metrics
	I1017 20:13:15.399031       1 controller.go:711] "Syncing nftables rules"
	I1017 20:13:24.897876       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:13:24.897968       1 main.go:301] handling current node
	I1017 20:13:34.902831       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:13:34.902876       1 main.go:301] handling current node
	I1017 20:13:44.897843       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:13:44.897880       1 main.go:301] handling current node
	I1017 20:13:54.897461       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:13:54.897489       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9544f431ca492c974165fabe4c6d006e40ae3fcecf8c5b140a370ddfe7fc6447] <==
	I1017 20:13:13.814367       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:13:13.815390       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:13:13.814662       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:13:13.818799       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 20:13:13.821897       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 20:13:13.821947       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:13:13.822130       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:13:13.822140       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:13:13.822148       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:13:13.822153       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:13:13.829345       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 20:13:13.829381       1 policy_source.go:240] refreshing policies
	I1017 20:13:13.829912       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:13:13.856335       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:13:14.094514       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:13:14.176406       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:13:14.220115       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:13:14.246843       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:13:14.257249       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:13:14.356349       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.99.139"}
	I1017 20:13:14.412296       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.9.53"}
	I1017 20:13:14.715909       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:13:17.189639       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:13:17.639321       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:13:17.689484       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4ae72c1607614926b75d0ad07975052274e878ae11cbacdc162e4c68994d3524] <==
	I1017 20:13:17.135771       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:13:17.135908       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:13:17.136031       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:13:17.136027       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-051488"
	I1017 20:13:17.136092       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:13:17.136313       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:13:17.136321       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:13:17.136332       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:13:17.137440       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:13:17.138675       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:13:17.141871       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:13:17.141902       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 20:13:17.141961       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:13:17.142019       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:13:17.142029       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:13:17.142037       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:13:17.143301       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:13:17.143312       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:13:17.144471       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:13:17.147816       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:13:17.154126       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:13:17.158430       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:13:17.160766       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:13:17.164031       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:13:17.164051       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [49e7ffb1962fab3caba55242c34213a2dad909b04dfe3f3a834dde0b028a70b6] <==
	I1017 20:13:14.483446       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:13:14.555018       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:13:14.656254       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:13:14.656299       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1017 20:13:14.656406       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:13:14.678933       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:13:14.679012       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:13:14.685793       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:13:14.686246       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:13:14.686275       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:13:14.687399       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:13:14.687418       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:13:14.687448       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:13:14.687457       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:13:14.687510       1 config.go:200] "Starting service config controller"
	I1017 20:13:14.688056       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:13:14.687604       1 config.go:309] "Starting node config controller"
	I1017 20:13:14.688117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:13:14.688123       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:13:14.788233       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:13:14.788274       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 20:13:14.788254       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [97ca4527b2004f03f6c41282bd4a923be38affabd40d6736b36d1e0fe5072144] <==
	I1017 20:13:13.052844       1 serving.go:386] Generated self-signed cert in-memory
	W1017 20:13:13.770584       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 20:13:13.770622       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:13:13.770635       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 20:13:13.770646       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 20:13:13.810116       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:13:13.810160       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:13:13.813973       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:13.814076       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:13.815258       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:13:13.815395       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:13:13.914316       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:13:17 embed-certs-051488 kubelet[712]: E1017 20:13:17.519204     712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52db9f43-c27f-4ced-bad4-085de15d48d2-kube-api-access-9596z podName:52db9f43-c27f-4ced-bad4-085de15d48d2 nodeName:}" failed. No retries permitted until 2025-10-17 20:13:18.01916464 +0000 UTC m=+7.139076887 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9596z" (UniqueName: "kubernetes.io/projected/52db9f43-c27f-4ced-bad4-085de15d48d2-kube-api-access-9596z") pod "kubernetes-dashboard-855c9754f9-xkxdm" (UID: "52db9f43-c27f-4ced-bad4-085de15d48d2") : configmap "kube-root-ca.crt" not found
	Oct 17 20:13:17 embed-certs-051488 kubelet[712]: E1017 20:13:17.519260     712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9343ca71-9f7f-4503-b2b8-cbce5e2021f1-kube-api-access-2jkjj podName:9343ca71-9f7f-4503-b2b8-cbce5e2021f1 nodeName:}" failed. No retries permitted until 2025-10-17 20:13:18.019240717 +0000 UTC m=+7.139152960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2jkjj" (UniqueName: "kubernetes.io/projected/9343ca71-9f7f-4503-b2b8-cbce5e2021f1-kube-api-access-2jkjj") pod "dashboard-metrics-scraper-6ffb444bf9-qpfxz" (UID: "9343ca71-9f7f-4503-b2b8-cbce5e2021f1") : configmap "kube-root-ca.crt" not found
	Oct 17 20:13:23 embed-certs-051488 kubelet[712]: I1017 20:13:23.059993     712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podStartSLOduration=1.492372659 podStartE2EDuration="6.059967112s" podCreationTimestamp="2025-10-17 20:13:17 +0000 UTC" firstStartedPulling="2025-10-17 20:13:18.40757879 +0000 UTC m=+7.527491037" lastFinishedPulling="2025-10-17 20:13:22.975173206 +0000 UTC m=+12.095085490" observedRunningTime="2025-10-17 20:13:23.059523272 +0000 UTC m=+12.179435520" watchObservedRunningTime="2025-10-17 20:13:23.059967112 +0000 UTC m=+12.179879359"
	Oct 17 20:13:24 embed-certs-051488 kubelet[712]: I1017 20:13:24.050525     712 scope.go:117] "RemoveContainer" containerID="5ef7f1711c9827b71b4ef77ce47e981582ecd7c08ffa3349e76f1bf759be745c"
	Oct 17 20:13:25 embed-certs-051488 kubelet[712]: I1017 20:13:25.056248     712 scope.go:117] "RemoveContainer" containerID="5ef7f1711c9827b71b4ef77ce47e981582ecd7c08ffa3349e76f1bf759be745c"
	Oct 17 20:13:25 embed-certs-051488 kubelet[712]: I1017 20:13:25.056443     712 scope.go:117] "RemoveContainer" containerID="caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c"
	Oct 17 20:13:25 embed-certs-051488 kubelet[712]: E1017 20:13:25.056649     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qpfxz_kubernetes-dashboard(9343ca71-9f7f-4503-b2b8-cbce5e2021f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podUID="9343ca71-9f7f-4503-b2b8-cbce5e2021f1"
	Oct 17 20:13:26 embed-certs-051488 kubelet[712]: I1017 20:13:26.061557     712 scope.go:117] "RemoveContainer" containerID="caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c"
	Oct 17 20:13:26 embed-certs-051488 kubelet[712]: E1017 20:13:26.061735     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qpfxz_kubernetes-dashboard(9343ca71-9f7f-4503-b2b8-cbce5e2021f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podUID="9343ca71-9f7f-4503-b2b8-cbce5e2021f1"
	Oct 17 20:13:27 embed-certs-051488 kubelet[712]: I1017 20:13:27.065030     712 scope.go:117] "RemoveContainer" containerID="caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c"
	Oct 17 20:13:27 embed-certs-051488 kubelet[712]: E1017 20:13:27.065291     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qpfxz_kubernetes-dashboard(9343ca71-9f7f-4503-b2b8-cbce5e2021f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podUID="9343ca71-9f7f-4503-b2b8-cbce5e2021f1"
	Oct 17 20:13:28 embed-certs-051488 kubelet[712]: I1017 20:13:28.081468     712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xkxdm" podStartSLOduration=1.762870025 podStartE2EDuration="11.081447337s" podCreationTimestamp="2025-10-17 20:13:17 +0000 UTC" firstStartedPulling="2025-10-17 20:13:18.427958911 +0000 UTC m=+7.547871136" lastFinishedPulling="2025-10-17 20:13:27.746536222 +0000 UTC m=+16.866448448" observedRunningTime="2025-10-17 20:13:28.081121425 +0000 UTC m=+17.201033672" watchObservedRunningTime="2025-10-17 20:13:28.081447337 +0000 UTC m=+17.201359583"
	Oct 17 20:13:40 embed-certs-051488 kubelet[712]: I1017 20:13:40.986231     712 scope.go:117] "RemoveContainer" containerID="caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c"
	Oct 17 20:13:41 embed-certs-051488 kubelet[712]: I1017 20:13:41.110385     712 scope.go:117] "RemoveContainer" containerID="caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c"
	Oct 17 20:13:41 embed-certs-051488 kubelet[712]: I1017 20:13:41.110661     712 scope.go:117] "RemoveContainer" containerID="878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef"
	Oct 17 20:13:41 embed-certs-051488 kubelet[712]: E1017 20:13:41.110897     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qpfxz_kubernetes-dashboard(9343ca71-9f7f-4503-b2b8-cbce5e2021f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podUID="9343ca71-9f7f-4503-b2b8-cbce5e2021f1"
	Oct 17 20:13:45 embed-certs-051488 kubelet[712]: I1017 20:13:45.124560     712 scope.go:117] "RemoveContainer" containerID="3eea8fc63f7454fa42560a9280bcad28b308b8a750fd423c60efbc5605f8ac6e"
	Oct 17 20:13:46 embed-certs-051488 kubelet[712]: I1017 20:13:46.609181     712 scope.go:117] "RemoveContainer" containerID="878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef"
	Oct 17 20:13:46 embed-certs-051488 kubelet[712]: E1017 20:13:46.609400     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qpfxz_kubernetes-dashboard(9343ca71-9f7f-4503-b2b8-cbce5e2021f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podUID="9343ca71-9f7f-4503-b2b8-cbce5e2021f1"
	Oct 17 20:14:00 embed-certs-051488 kubelet[712]: I1017 20:14:00.986678     712 scope.go:117] "RemoveContainer" containerID="878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef"
	Oct 17 20:14:00 embed-certs-051488 kubelet[712]: E1017 20:14:00.986910     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qpfxz_kubernetes-dashboard(9343ca71-9f7f-4503-b2b8-cbce5e2021f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podUID="9343ca71-9f7f-4503-b2b8-cbce5e2021f1"
	Oct 17 20:14:01 embed-certs-051488 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:14:01 embed-certs-051488 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:14:01 embed-certs-051488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 20:14:01 embed-certs-051488 systemd[1]: kubelet.service: Consumed 1.844s CPU time.
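	The kubelet messages above show the CrashLoopBackOff delay for dashboard-metrics-scraper growing from 10s to 20s. Kubelet's restart back-off starts at 10s and doubles per crash, capped at 5 minutes (and reset after a sufficiently long clean run); a small illustrative Go sketch of that schedule (not code from this repository):
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		delay := 10 * time.Second   // kubelet's initial CrashLoopBackOff delay, as in "back-off 10s" above
		maxDelay := 5 * time.Minute // the documented cap
		for crash := 1; crash <= 7; crash++ {
			fmt.Printf("crash #%d -> back-off %s\n", crash, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}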
	
	
	==> kubernetes-dashboard [591c0cf97c3dfa030e2cbd5dd65036ac54db823bcde7ded3a5dbdeedd3743984] <==
	2025/10/17 20:13:27 Starting overwatch
	2025/10/17 20:13:27 Using namespace: kubernetes-dashboard
	2025/10/17 20:13:27 Using in-cluster config to connect to apiserver
	2025/10/17 20:13:27 Using secret token for csrf signing
	2025/10/17 20:13:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:13:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:13:27 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 20:13:27 Generating JWE encryption key
	2025/10/17 20:13:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:13:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:13:28 Initializing JWE encryption key from synchronized object
	2025/10/17 20:13:28 Creating in-cluster Sidecar client
	2025/10/17 20:13:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:13:28 Serving insecurely on HTTP port: 9090
	2025/10/17 20:13:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3c57fa6d89c0b59a810362081ee84b1bd7cda2168f28b703f844483a10a796ab] <==
	I1017 20:13:45.187906       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:13:45.198432       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:13:45.198554       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:13:45.201965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:48.657937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:52.918631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:56.517380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:59.570872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:02.594149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:02.599467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:14:02.599629       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:14:02.599695       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c795daa1-3cc4-4dc8-b9fb-3eec5780324d", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-051488_9da4204f-6530-443d-b8f9-b63cc80b35e6 became leader
	I1017 20:14:02.599806       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-051488_9da4204f-6530-443d-b8f9-b63cc80b35e6!
	W1017 20:14:02.602303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:02.606298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:14:02.701024       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-051488_9da4204f-6530-443d-b8f9-b63cc80b35e6!
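	The block above shows the restarted storage-provisioner retrying a leader-election lock until it acquires kube-system/k8s.io-minikube-hostpath; the v1 Endpoints deprecation warnings come from its legacy Endpoints-based lock object. A hedged sketch of the same flow with client-go, using the newer Lease lock type instead (illustrative only; the lock name and identity format mirror the log, and this is not the provisioner's actual code):
	
	package main
	
	import (
		"context"
		"log"
		"os"
		"time"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		id, _ := os.Hostname() // identity, e.g. embed-certs-051488_<uuid> in the log
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: id})
		if err != nil {
			log.Fatal(err)
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("became leader; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease; stopping")
				},
			},
		})
	}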
	
	
	==> storage-provisioner [3eea8fc63f7454fa42560a9280bcad28b308b8a750fd423c60efbc5605f8ac6e] <==
	I1017 20:13:14.439672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:13:44.445163       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-051488 -n embed-certs-051488
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-051488 -n embed-certs-051488: exit status 2 (341.136726ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-051488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-051488
helpers_test.go:243: (dbg) docker inspect embed-certs-051488:

-- stdout --
	[
	    {
	        "Id": "8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9",
	        "Created": "2025-10-17T20:11:58.181534777Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 396120,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:13:04.325340027Z",
	            "FinishedAt": "2025-10-17T20:13:03.373771388Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9/hostname",
	        "HostsPath": "/var/lib/docker/containers/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9/hosts",
	        "LogPath": "/var/lib/docker/containers/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9/8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9-json.log",
	        "Name": "/embed-certs-051488",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-051488:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-051488",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8985127eaa32ba972683af7230e2ff162898287924b216dfdb6d5e07757027e9",
	                "LowerDir": "/var/lib/docker/overlay2/684b82987b68d7135a27ad8b5cf1b32e9c1320900d7e0bc08bfd98a435c63c89-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/684b82987b68d7135a27ad8b5cf1b32e9c1320900d7e0bc08bfd98a435c63c89/merged",
	                "UpperDir": "/var/lib/docker/overlay2/684b82987b68d7135a27ad8b5cf1b32e9c1320900d7e0bc08bfd98a435c63c89/diff",
	                "WorkDir": "/var/lib/docker/overlay2/684b82987b68d7135a27ad8b5cf1b32e9c1320900d7e0bc08bfd98a435c63c89/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-051488",
	                "Source": "/var/lib/docker/volumes/embed-certs-051488/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-051488",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-051488",
	                "name.minikube.sigs.k8s.io": "embed-certs-051488",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "935247c2b6abd4d68c4ec038fc232d8734710a06bdf90754c5f0df051e9724d6",
	            "SandboxKey": "/var/run/docker/netns/935247c2b6ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33209"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33210"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33213"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33211"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33212"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-051488": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:0c:04:aa:53:4c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f65906aaca8cabced2699549a6acf35f9aee8c707d1ca3ba4422f5bcdf4982c0",
	                    "EndpointID": "48ff3722de84576045a629fa0564a896c6af6989b59ebae0df2038054a0a5c69",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-051488",
	                        "8985127eaa32"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
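The inspect dump above ends with NetworkSettings.Ports, where each exposed container port (22, 2376, 5000, 8443, 32443) is bound to an ephemeral host port on 127.0.0.1. A minimal Go sketch, outside the test harness, of reading one of those ports back; it uses the same Go-template trick minikube itself runs later in this log for "22/tcp", with the container name taken from the dump:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pull the host side of the 8443/tcp mapping straight from docker inspect.
	tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "embed-certs-051488").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// With the dump above this prints 33212.
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
}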
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-051488 -n embed-certs-051488
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-051488 -n embed-certs-051488: exit status 2 (335.49547ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
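`minikube status --format={{.Host}}` prints the host state on stdout and encodes overall cluster health in its exit code, which is why the harness sees "Running" alongside exit status 2 and notes "may be ok". A rough sketch of that probe, assuming the same binary and profile names as above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "embed-certs-051488", "-n", "embed-certs-051488")
	out, err := cmd.Output() // stdout ("Running") is captured even on a nonzero exit
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode() // 2 here signals a degraded component, not a failed command
	} else if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("host=%s exit=%d\n", strings.TrimSpace(string(out)), code)
}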
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-051488 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-051488 logs -n 25: (1.264376703s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-051488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ stop    │ -p embed-certs-051488 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:13 UTC │
	│ addons  │ enable metrics-server -p newest-cni-051083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │                     │
	│ stop    │ -p newest-cni-051083 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-051083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:12 UTC │
	│ start   │ -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:12 UTC │ 17 Oct 25 20:13 UTC │
	│ addons  │ enable dashboard -p embed-certs-051488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ image   │ newest-cni-051083 image list --format=json                                                                                                                                                                                                    │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ pause   │ -p newest-cni-051083 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-563805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-563805 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p newest-cni-051083                                                                                                                                                                                                                          │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p newest-cni-051083                                                                                                                                                                                                                          │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p cert-options-318223 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-660693    │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-660693    │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-563805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ ssh     │ cert-options-318223 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ ssh     │ -p cert-options-318223 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p cert-options-318223                                                                                                                                                                                                                        │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p auto-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-684669                  │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ image   │ embed-certs-051488 image list --format=json                                                                                                                                                                                                   │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	│ pause   │ -p embed-certs-051488 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:13:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
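The header format documented on the line above is the standard glog/klog layout used by every minikube log line that follows. A small Go sketch that splits one such line into its fields (the regular expression and field names are illustrative):

package main

import (
	"fmt"
	"regexp"
)

// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	sample := "I1017 20:13:42.855350  407971 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("level=%s date=%s time=%s threadid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}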
	I1017 20:13:42.855350  407971 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:13:42.855661  407971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:13:42.855672  407971 out.go:374] Setting ErrFile to fd 2...
	I1017 20:13:42.855675  407971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:13:42.855953  407971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:13:42.856573  407971 out.go:368] Setting JSON to false
	I1017 20:13:42.858335  407971 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6971,"bootTime":1760725052,"procs":448,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:13:42.858450  407971 start.go:141] virtualization: kvm guest
	I1017 20:13:42.860517  407971 out.go:179] * [auto-684669] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:13:42.862071  407971 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:13:42.862079  407971 notify.go:220] Checking for updates...
	I1017 20:13:42.864943  407971 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:13:42.866189  407971 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:13:42.867532  407971 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:13:42.868929  407971 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:13:42.870319  407971 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:13:42.872413  407971 config.go:182] Loaded profile config "default-k8s-diff-port-563805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:42.872498  407971 config.go:182] Loaded profile config "embed-certs-051488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:42.872583  407971 config.go:182] Loaded profile config "kubernetes-upgrade-660693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:42.872687  407971 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:13:42.897535  407971 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:13:42.897646  407971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:13:42.962108  407971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-17 20:13:42.950866524 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
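cli_runner shells out to `docker system info --format "{{json .}}"` and minikube parses the JSON into a struct; the dump above is that struct re-printed. A reduced sketch, assuming only a handful of fields matter, of decoding the same JSON (the field names match keys visible in the dump, e.g. NCPU:8, Driver:overlay2, CgroupDriver:systemd):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInfo struct {
	NCPU         int    `json:"NCPU"`
	MemTotal     int64  `json:"MemTotal"`
	Driver       string `json:"Driver"`
	CgroupDriver string `json:"CgroupDriver"`
	OSType       string `json:"OSType"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%+v\n", info) // e.g. {NCPU:8 MemTotal:33652174848 Driver:overlay2 ...}
}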
	I1017 20:13:42.962210  407971 docker.go:318] overlay module found
	I1017 20:13:42.964501  407971 out.go:179] * Using the docker driver based on user configuration
	I1017 20:13:42.966213  407971 start.go:305] selected driver: docker
	I1017 20:13:42.966248  407971 start.go:925] validating driver "docker" against <nil>
	I1017 20:13:42.966265  407971 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:13:42.966885  407971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:13:43.028227  407971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-17 20:13:43.017832442 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:13:43.028399  407971 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:13:43.028632  407971 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:13:43.031262  407971 out.go:179] * Using Docker driver with root privileges
	I1017 20:13:43.032968  407971 cni.go:84] Creating CNI manager for ""
	I1017 20:13:43.033054  407971 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:13:43.033066  407971 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:13:43.033147  407971 start.go:349] cluster config:
	{Name:auto-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:13:43.034864  407971 out.go:179] * Starting "auto-684669" primary control-plane node in "auto-684669" cluster
	I1017 20:13:43.036235  407971 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:13:43.037608  407971 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:13:43.039203  407971 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:13:43.039261  407971 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 20:13:43.039274  407971 cache.go:58] Caching tarball of preloaded images
	I1017 20:13:43.039326  407971 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:13:43.039416  407971 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:13:43.039434  407971 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:13:43.039575  407971 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/config.json ...
	I1017 20:13:43.039603  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/config.json: {Name:mk61c4e3aaa1fc1676890341ad47c24d8e093beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
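The lock.go line above shows the profile's config.json being written under a named lock with Delay:500ms and Timeout:1m0s. A rough stand-in for that acquire loop, using an exclusive lock file rather than minikube's actual mutex package (path and semantics are illustrative):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation atomic: only one process can hold the file.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay) // matches the Delay:500ms seen in the log
	}
}

func main() {
	release, err := acquire("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; safe to write config.json")
}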
	I1017 20:13:43.062022  407971 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:13:43.062050  407971 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:13:43.062068  407971 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:13:43.062100  407971 start.go:360] acquireMachinesLock for auto-684669: {Name:mk616488c2ac15954365af4978649d5629bee3e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:13:43.062233  407971 start.go:364] duration metric: took 106.751µs to acquireMachinesLock for "auto-684669"
	I1017 20:13:43.062266  407971 start.go:93] Provisioning new machine with config: &{Name:auto-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:13:43.062361  407971 start.go:125] createHost starting for "" (driver="docker")
	W1017 20:13:40.161711  395845 pod_ready.go:104] pod "coredns-66bc5c9577-gq5dd" is not "Ready", error: <nil>
	W1017 20:13:42.162247  395845 pod_ready.go:104] pod "coredns-66bc5c9577-gq5dd" is not "Ready", error: <nil>
	I1017 20:13:40.010173  405011 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 20:13:40.014983  405011 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:13:40.015019  405011 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:13:40.509635  405011 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 20:13:40.514330  405011 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1017 20:13:40.515578  405011 api_server.go:141] control plane version: v1.34.1
	I1017 20:13:40.515621  405011 api_server.go:131] duration metric: took 1.006266723s to wait for apiserver health ...
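The healthz exchange above is a plain poll: GET /healthz on the apiserver until the 500 responses, whose bodies list each [+]/[-] post-start hook, give way to 200 "ok". A condensed Go sketch of that loop; TLS verification is skipped here for brevity where the real client would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 120; i++ {
		resp, err := client.Get("https://192.168.85.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK { // body is "ok"
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 500 responses carry the per-hook [-]poststarthook/... lines seen above.
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}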
	I1017 20:13:40.515633  405011 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:13:40.519496  405011 system_pods.go:59] 8 kube-system pods found
	I1017 20:13:40.519536  405011 system_pods.go:61] "coredns-66bc5c9577-bsp94" [bf23fe6e-8ed0-4e40-92cd-65e6940b198d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:13:40.519547  405011 system_pods.go:61] "etcd-default-k8s-diff-port-563805" [ef713db5-e896-4ffa-a845-581fce8aba91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:13:40.519555  405011 system_pods.go:61] "kindnet-gzsxs" [eeb2f556-2ec6-4874-a910-c441e7cc0770] Running
	I1017 20:13:40.519563  405011 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-563805" [b6332401-9281-4f9a-bb12-02860b0b2276] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:13:40.519573  405011 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-563805" [466703e0-7428-428a-a770-fdcd8b10d8f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:13:40.519581  405011 system_pods.go:61] "kube-proxy-g7749" [812ff08f-93ab-4a35-bf0c-2aabb5d4b9b8] Running
	I1017 20:13:40.519590  405011 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-563805" [6a46e6a6-1cc3-420e-9183-f171d6ee3dbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:13:40.519594  405011 system_pods.go:61] "storage-provisioner" [654c455e-6dcf-46d1-8664-0c1579d0a498] Running
	I1017 20:13:40.519602  405011 system_pods.go:74] duration metric: took 3.96234ms to wait for pod list to return data ...
	I1017 20:13:40.519614  405011 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:13:40.523187  405011 default_sa.go:45] found service account: "default"
	I1017 20:13:40.523222  405011 default_sa.go:55] duration metric: took 3.601221ms for default service account to be created ...
	I1017 20:13:40.523237  405011 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:13:40.526838  405011 system_pods.go:86] 8 kube-system pods found
	I1017 20:13:40.526877  405011 system_pods.go:89] "coredns-66bc5c9577-bsp94" [bf23fe6e-8ed0-4e40-92cd-65e6940b198d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:13:40.526889  405011 system_pods.go:89] "etcd-default-k8s-diff-port-563805" [ef713db5-e896-4ffa-a845-581fce8aba91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:13:40.526897  405011 system_pods.go:89] "kindnet-gzsxs" [eeb2f556-2ec6-4874-a910-c441e7cc0770] Running
	I1017 20:13:40.526907  405011 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-563805" [b6332401-9281-4f9a-bb12-02860b0b2276] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:13:40.526921  405011 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-563805" [466703e0-7428-428a-a770-fdcd8b10d8f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:13:40.526930  405011 system_pods.go:89] "kube-proxy-g7749" [812ff08f-93ab-4a35-bf0c-2aabb5d4b9b8] Running
	I1017 20:13:40.526939  405011 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-563805" [6a46e6a6-1cc3-420e-9183-f171d6ee3dbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:13:40.526954  405011 system_pods.go:89] "storage-provisioner" [654c455e-6dcf-46d1-8664-0c1579d0a498] Running
	I1017 20:13:40.526966  405011 system_pods.go:126] duration metric: took 3.72087ms to wait for k8s-apps to be running ...
	I1017 20:13:40.526979  405011 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:13:40.527017  405011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:13:40.541205  405011 system_svc.go:56] duration metric: took 14.214885ms WaitForService to wait for kubelet
	I1017 20:13:40.541241  405011 kubeadm.go:586] duration metric: took 3.368075726s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
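The kubelet check a few lines up reduces to an exit-code probe: `systemctl is-active --quiet` exits 0 only when the unit is active. The same probe locally, minus minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 => active; any nonzero exit => inactive, failed, or unknown.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}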
	I1017 20:13:40.541264  405011 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:13:40.544535  405011 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:13:40.544566  405011 node_conditions.go:123] node cpu capacity is 8
	I1017 20:13:40.544579  405011 node_conditions.go:105] duration metric: took 3.310003ms to run NodePressure ...
	I1017 20:13:40.544591  405011 start.go:241] waiting for startup goroutines ...
	I1017 20:13:40.544598  405011 start.go:246] waiting for cluster config update ...
	I1017 20:13:40.544608  405011 start.go:255] writing updated cluster config ...
	I1017 20:13:40.544883  405011 ssh_runner.go:195] Run: rm -f paused
	I1017 20:13:40.549171  405011 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:13:40.554060  405011 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bsp94" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:13:42.560349  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:13:44.560593  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:13:44.660870  395845 pod_ready.go:104] pod "coredns-66bc5c9577-gq5dd" is not "Ready", error: <nil>
	I1017 20:13:46.161283  395845 pod_ready.go:94] pod "coredns-66bc5c9577-gq5dd" is "Ready"
	I1017 20:13:46.161315  395845 pod_ready.go:86] duration metric: took 31.006834558s for pod "coredns-66bc5c9577-gq5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.165994  395845 pod_ready.go:83] waiting for pod "etcd-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.171659  395845 pod_ready.go:94] pod "etcd-embed-certs-051488" is "Ready"
	I1017 20:13:46.171690  395845 pod_ready.go:86] duration metric: took 5.670006ms for pod "etcd-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.174345  395845 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.179314  395845 pod_ready.go:94] pod "kube-apiserver-embed-certs-051488" is "Ready"
	I1017 20:13:46.179340  395845 pod_ready.go:86] duration metric: took 4.970841ms for pod "kube-apiserver-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.182039  395845 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.357842  395845 pod_ready.go:94] pod "kube-controller-manager-embed-certs-051488" is "Ready"
	I1017 20:13:46.357871  395845 pod_ready.go:86] duration metric: took 175.800345ms for pod "kube-controller-manager-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.559044  395845 pod_ready.go:83] waiting for pod "kube-proxy-95wmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:46.957860  395845 pod_ready.go:94] pod "kube-proxy-95wmw" is "Ready"
	I1017 20:13:46.957891  395845 pod_ready.go:86] duration metric: took 398.812686ms for pod "kube-proxy-95wmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:47.158932  395845 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:47.559152  395845 pod_ready.go:94] pod "kube-scheduler-embed-certs-051488" is "Ready"
	I1017 20:13:47.559185  395845 pod_ready.go:86] duration metric: took 400.218683ms for pod "kube-scheduler-embed-certs-051488" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:13:47.559201  395845 pod_ready.go:40] duration metric: took 32.412002681s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
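The pod_ready.go wait that just finished polls each labeled kube-system pod until its PodReady condition reports True (or the pod is gone). A condensed client-go sketch of that check for one label; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
			fmt.Println("coredns is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}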
	I1017 20:13:47.621222  395845 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:13:47.710484  395845 out.go:179] * Done! kubectl is now configured to use "embed-certs-051488" cluster and "default" namespace by default
	I1017 20:13:43.064686  407971 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:13:43.064940  407971 start.go:159] libmachine.API.Create for "auto-684669" (driver="docker")
	I1017 20:13:43.064976  407971 client.go:168] LocalClient.Create starting
	I1017 20:13:43.065079  407971 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem
	I1017 20:13:43.065128  407971 main.go:141] libmachine: Decoding PEM data...
	I1017 20:13:43.065152  407971 main.go:141] libmachine: Parsing certificate...
	I1017 20:13:43.065252  407971 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem
	I1017 20:13:43.065288  407971 main.go:141] libmachine: Decoding PEM data...
	I1017 20:13:43.065304  407971 main.go:141] libmachine: Parsing certificate...
	I1017 20:13:43.065694  407971 cli_runner.go:164] Run: docker network inspect auto-684669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:13:43.083132  407971 cli_runner.go:211] docker network inspect auto-684669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:13:43.083229  407971 network_create.go:284] running [docker network inspect auto-684669] to gather additional debugging logs...
	I1017 20:13:43.083279  407971 cli_runner.go:164] Run: docker network inspect auto-684669
	W1017 20:13:43.102227  407971 cli_runner.go:211] docker network inspect auto-684669 returned with exit code 1
	I1017 20:13:43.102267  407971 network_create.go:287] error running [docker network inspect auto-684669]: docker network inspect auto-684669: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-684669 not found
	I1017 20:13:43.102283  407971 network_create.go:289] output of [docker network inspect auto-684669]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-684669 not found
	
	** /stderr **
	I1017 20:13:43.102459  407971 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:13:43.124813  407971 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d34a70da1174 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:b8:c9:c3:2e:b0} reservation:<nil>}
	I1017 20:13:43.125612  407971 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-07edace58173 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f3:28:2c:52:ce} reservation:<nil>}
	I1017 20:13:43.126376  407971 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a478249e8fe7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:51:65:8d:cb:60} reservation:<nil>}
	I1017 20:13:43.127220  407971 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7ed8ef1bc0a4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:6a:98:d7:e8:28} reservation:<nil>}
	I1017 20:13:43.127648  407971 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9a4aaba57340 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:16:30:99:20:8d:be} reservation:<nil>}
	I1017 20:13:43.128556  407971 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f65906aaca8c IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ba:86:9c:15:01:28} reservation:<nil>}
	I1017 20:13:43.129455  407971 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020309b0}
	I1017 20:13:43.129480  407971 network_create.go:124] attempt to create docker network auto-684669 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1017 20:13:43.129534  407971 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-684669 auto-684669
	I1017 20:13:43.192265  407971 network_create.go:108] docker network auto-684669 192.168.103.0/24 created
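The subnet scan above starts at 192.168.49.0/24 and, judging by the sequence 49, 58, 67, 76, 85, 94, 103, advances the third octet in steps of 9 until it finds a /24 that no existing bridge claims. A toy version of that walk, with the taken set hard-coded from this log:

package main

import "fmt"

func main() {
	// Third octets already claimed by the bridges enumerated above.
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
	for octet := 49; octet <= 254; octet += 9 {
		if taken[octet] {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet) // lands on 103
		break
	}
}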
	I1017 20:13:43.192296  407971 kic.go:121] calculated static IP "192.168.103.2" for the "auto-684669" container
	I1017 20:13:43.192360  407971 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:13:43.210808  407971 cli_runner.go:164] Run: docker volume create auto-684669 --label name.minikube.sigs.k8s.io=auto-684669 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:13:43.229348  407971 oci.go:103] Successfully created a docker volume auto-684669
	I1017 20:13:43.229422  407971 cli_runner.go:164] Run: docker run --rm --name auto-684669-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-684669 --entrypoint /usr/bin/test -v auto-684669:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:13:43.626952  407971 oci.go:107] Successfully prepared a docker volume auto-684669
	I1017 20:13:43.627008  407971 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:13:43.627043  407971 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:13:43.627116  407971 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-684669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1017 20:13:46.566084  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:13:48.590665  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	I1017 20:13:48.708237  407971 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-684669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.08107411s)
	I1017 20:13:48.708286  407971 kic.go:203] duration metric: took 5.081238416s to extract preloaded images to volume ...
	W1017 20:13:48.708423  407971 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 20:13:48.708477  407971 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 20:13:48.708533  407971 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:13:48.783220  407971 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-684669 --name auto-684669 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-684669 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-684669 --network auto-684669 --ip 192.168.103.2 --volume auto-684669:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:13:49.745164  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Running}}
	I1017 20:13:49.772680  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Status}}
	I1017 20:13:49.795843  407971 cli_runner.go:164] Run: docker exec auto-684669 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:13:49.852141  407971 oci.go:144] the created container "auto-684669" has a running status.
	I1017 20:13:49.852181  407971 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa...
	I1017 20:13:49.939242  407971 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:13:49.974400  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Status}}
	I1017 20:13:49.999957  407971 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:13:49.999985  407971 kic_runner.go:114] Args: [docker exec --privileged auto-684669 chown docker:docker /home/docker/.ssh/authorized_keys]
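The ssh key provisioning above boils down to three steps: generate a keypair on the host, install the public half as the in-container docker user's authorized_keys, and fix ownership. Sketched with standard tooling (ssh-keygen stands in for minikube's in-process key generator):

    # MACHINE_DIR mirrors the machines/auto-684669 path in the log.
    MACHINE_DIR="$HOME/.minikube/machines/auto-684669"
    ssh-keygen -t rsa -N '' -f "$MACHINE_DIR/id_rsa"
    docker exec auto-684669 install -d -o docker -g docker /home/docker/.ssh
    docker cp "$MACHINE_DIR/id_rsa.pub" auto-684669:/home/docker/.ssh/authorized_keys
    docker exec --privileged auto-684669 chown docker:docker /home/docker/.ssh/authorized_keys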
	I1017 20:13:50.055378  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Status}}
	I1017 20:13:50.078249  407971 machine.go:93] provisionDockerMachine start ...
	I1017 20:13:50.078370  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:50.105796  407971 main.go:141] libmachine: Using SSH client type: native
	I1017 20:13:50.106438  407971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1017 20:13:50.106468  407971 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:13:50.107544  407971 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50138->127.0.0.1:33224: read: connection reset by peer
	W1017 20:13:51.059604  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:13:53.060171  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	I1017 20:13:53.244265  407971 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-684669
	
	I1017 20:13:53.244301  407971 ubuntu.go:182] provisioning hostname "auto-684669"
	I1017 20:13:53.244379  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:53.264486  407971 main.go:141] libmachine: Using SSH client type: native
	I1017 20:13:53.264726  407971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1017 20:13:53.264757  407971 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-684669 && echo "auto-684669" | sudo tee /etc/hostname
	I1017 20:13:53.412584  407971 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-684669
	
	I1017 20:13:53.412676  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:53.430680  407971 main.go:141] libmachine: Using SSH client type: native
	I1017 20:13:53.430959  407971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1017 20:13:53.430980  407971 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-684669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-684669/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-684669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:13:53.571905  407971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:13:53.571940  407971 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:13:53.571969  407971 ubuntu.go:190] setting up certificates
	I1017 20:13:53.571981  407971 provision.go:84] configureAuth start
	I1017 20:13:53.572050  407971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-684669
	I1017 20:13:53.590768  407971 provision.go:143] copyHostCerts
	I1017 20:13:53.590829  407971 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:13:53.590837  407971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:13:53.590907  407971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:13:53.591006  407971 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:13:53.591016  407971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:13:53.591042  407971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:13:53.591111  407971 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:13:53.591119  407971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:13:53.591142  407971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:13:53.591200  407971 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.auto-684669 san=[127.0.0.1 192.168.103.2 auto-684669 localhost minikube]
	I1017 20:13:53.772877  407971 provision.go:177] copyRemoteCerts
	I1017 20:13:53.772939  407971 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:13:53.772976  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:53.791242  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:13:53.889939  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:13:53.911484  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1017 20:13:53.929896  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:13:53.948800  407971 provision.go:87] duration metric: took 376.798335ms to configureAuth
	I1017 20:13:53.948831  407971 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:13:53.949069  407971 config.go:182] Loaded profile config "auto-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:13:53.949198  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:53.968090  407971 main.go:141] libmachine: Using SSH client type: native
	I1017 20:13:53.968327  407971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1017 20:13:53.968344  407971 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:13:54.221455  407971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:13:54.221485  407971 machine.go:96] duration metric: took 4.143200501s to provisionDockerMachine
	I1017 20:13:54.221499  407971 client.go:171] duration metric: took 11.156512575s to LocalClient.Create
	I1017 20:13:54.221530  407971 start.go:167] duration metric: took 11.156586415s to libmachine.API.Create "auto-684669"
	I1017 20:13:54.221544  407971 start.go:293] postStartSetup for "auto-684669" (driver="docker")
	I1017 20:13:54.221562  407971 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:13:54.221641  407971 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:13:54.221695  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:54.242205  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:13:54.343840  407971 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:13:54.348105  407971 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:13:54.348141  407971 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:13:54.348156  407971 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:13:54.348223  407971 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:13:54.348314  407971 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:13:54.348415  407971 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:13:54.357092  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:13:54.379922  407971 start.go:296] duration metric: took 158.355317ms for postStartSetup
	I1017 20:13:54.380311  407971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-684669
	I1017 20:13:54.400293  407971 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/config.json ...
	I1017 20:13:54.400613  407971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:13:54.400659  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:54.418771  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:13:54.513275  407971 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:13:54.518814  407971 start.go:128] duration metric: took 11.456433869s to createHost
	I1017 20:13:54.518844  407971 start.go:83] releasing machines lock for "auto-684669", held for 11.456594142s
	I1017 20:13:54.518925  407971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-684669
	I1017 20:13:54.537599  407971 ssh_runner.go:195] Run: cat /version.json
	I1017 20:13:54.537665  407971 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:13:54.537670  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:54.537849  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:13:54.556755  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:13:54.557608  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:13:54.653419  407971 ssh_runner.go:195] Run: systemctl --version
	I1017 20:13:54.707164  407971 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:13:54.744527  407971 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:13:54.749860  407971 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:13:54.749945  407971 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:13:54.778387  407971 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
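ssh_runner logs the find command with its shell quoting stripped, so the line two entries up is not copy-pasteable as printed. The same disable-by-rename step, quoted properly:

    # Rename bridge/podman CNI configs out of the way so the kindnet CNI owns the node.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;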
	I1017 20:13:54.778412  407971 start.go:495] detecting cgroup driver to use...
	I1017 20:13:54.778443  407971 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:13:54.778483  407971 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:13:54.795070  407971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:13:54.808533  407971 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:13:54.808596  407971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:13:54.827118  407971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:13:54.845268  407971 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:13:54.929685  407971 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:13:55.018691  407971 docker.go:234] disabling docker service ...
	I1017 20:13:55.018792  407971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:13:55.038588  407971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:13:55.051807  407971 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:13:55.139472  407971 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:13:55.222995  407971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:13:55.236704  407971 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:13:55.251861  407971 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:13:55.251933  407971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.264012  407971 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:13:55.264074  407971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.274221  407971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.283756  407971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.293875  407971 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:13:55.303114  407971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.313025  407971 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.328506  407971 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:13:55.338432  407971 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:13:55.347159  407971 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:13:55.355776  407971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:13:55.442100  407971 ssh_runner.go:195] Run: sudo systemctl restart crio
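Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like the following before the restart (an illustrative reconstruction from the commands, including assumed section headers, not a capture from the node):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]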
	I1017 20:13:55.672622  407971 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:13:55.672697  407971 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:13:55.676889  407971 start.go:563] Will wait 60s for crictl version
	I1017 20:13:55.676963  407971 ssh_runner.go:195] Run: which crictl
	I1017 20:13:55.680941  407971 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:13:55.706704  407971 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:13:55.706816  407971 ssh_runner.go:195] Run: crio --version
	I1017 20:13:55.736218  407971 ssh_runner.go:195] Run: crio --version
	I1017 20:13:55.769308  407971 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:13:55.770860  407971 cli_runner.go:164] Run: docker network inspect auto-684669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:13:55.788651  407971 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1017 20:13:55.793207  407971 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
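With that hosts entry in place, the Docker network gateway is reachable from inside the node by name. A one-line check (getent ships in the Debian-based kicbase image):

    # host.minikube.internal should resolve to the gateway, 192.168.103.1 here.
    docker exec auto-684669 getent hosts host.minikube.internal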
	I1017 20:13:55.805607  407971 kubeadm.go:883] updating cluster {Name:auto-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:13:55.805734  407971 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:13:55.805808  407971 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:13:55.841844  407971 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:13:55.841867  407971 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:13:55.841914  407971 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:13:55.867795  407971 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:13:55.867822  407971 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:13:55.867831  407971 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1017 20:13:55.867911  407971 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-684669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
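The ExecStart above is written to a systemd drop-in rather than into the unit file itself; the scp lines below stage 10-kubeadm.conf next to kubelet.service. To see the merged unit the node actually runs:

    # Show kubelet.service plus the minikube drop-in as systemd merges them.
    docker exec auto-684669 systemctl cat kubelet.service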
	I1017 20:13:55.867969  407971 ssh_runner.go:195] Run: crio config
	I1017 20:13:55.915614  407971 cni.go:84] Creating CNI manager for ""
	I1017 20:13:55.915637  407971 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:13:55.915655  407971 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:13:55.915675  407971 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-684669 NodeName:auto-684669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:13:55.915817  407971 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-684669"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:13:55.915879  407971 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:13:55.924590  407971 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:13:55.924653  407971 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:13:55.932920  407971 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1017 20:13:55.946188  407971 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:13:55.962392  407971 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
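The rendered config is staged as kubeadm.yaml.new and only copied over kubeadm.yaml just before init (see the cp further down). Recent kubeadm releases can sanity-check such a file first; a hedged sketch against the staged path:

    # Validate the staged config with the matching kubeadm binary on the node.
    docker exec auto-684669 /var/lib/minikube/binaries/v1.34.1/kubeadm \
      config validate --config /var/tmp/minikube/kubeadm.yaml.new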
	I1017 20:13:55.975869  407971 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:13:55.979715  407971 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:13:55.990145  407971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:13:56.076893  407971 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:13:56.103429  407971 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669 for IP: 192.168.103.2
	I1017 20:13:56.103452  407971 certs.go:195] generating shared ca certs ...
	I1017 20:13:56.103472  407971 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.103634  407971 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:13:56.103702  407971 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:13:56.103717  407971 certs.go:257] generating profile certs ...
	I1017 20:13:56.103808  407971 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/client.key
	I1017 20:13:56.103834  407971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/client.crt with IP's: []
	I1017 20:13:56.203410  407971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/client.crt ...
	I1017 20:13:56.203444  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/client.crt: {Name:mkffb4d795f67dea6565d0e32106dff0d1d55f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.203618  407971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/client.key ...
	I1017 20:13:56.203636  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/client.key: {Name:mkc3f9ff4b434c1609e2281e01f0f4482110b189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.203718  407971 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.key.c9ade39f
	I1017 20:13:56.203734  407971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.crt.c9ade39f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1017 20:13:56.349546  407971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.crt.c9ade39f ...
	I1017 20:13:56.349576  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.crt.c9ade39f: {Name:mk4501794e1a5131f1d4d33f0f907daab8c8b53d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.349760  407971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.key.c9ade39f ...
	I1017 20:13:56.349775  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.key.c9ade39f: {Name:mk40b8d9f3bc19deea784f507e5415d43f96c4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.349870  407971 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.crt.c9ade39f -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.crt
	I1017 20:13:56.349954  407971 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.key.c9ade39f -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.key
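The SAN list is the load-bearing detail here: the apiserver certificate has to cover the Service VIP 10.96.0.1 (the first address of the 10.96.0.0/12 ServiceCIDR) as well as the node IP, or in-cluster clients fail TLS verification. To confirm on the generated cert (OpenSSL 1.1.1+ syntax):

    # Print the SANs baked into the freshly generated apiserver certificate.
    openssl x509 -noout -ext subjectAltName \
      -in "$HOME/.minikube/profiles/auto-684669/apiserver.crt"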
	I1017 20:13:56.350025  407971 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.key
	I1017 20:13:56.350041  407971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.crt with IP's: []
	I1017 20:13:56.512251  407971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.crt ...
	I1017 20:13:56.512288  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.crt: {Name:mka503b80a304641d4a4b7be36cf3ebf270e9365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.512494  407971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.key ...
	I1017 20:13:56.512507  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.key: {Name:mkc6769b858a1cdba7a97b901dc1168e5da207b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:13:56.512698  407971 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:13:56.512733  407971 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:13:56.512758  407971 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:13:56.512791  407971 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:13:56.512815  407971 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:13:56.512840  407971 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:13:56.512877  407971 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:13:56.513446  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:13:56.533709  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:13:56.552695  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:13:56.573436  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:13:56.592575  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1017 20:13:56.611866  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:13:56.630664  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:13:56.650355  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/auto-684669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:13:56.670037  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:13:56.691476  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:13:56.710840  407971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:13:56.730239  407971 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:13:56.743568  407971 ssh_runner.go:195] Run: openssl version
	I1017 20:13:56.750166  407971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:13:56.760001  407971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:13:56.764353  407971 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:13:56.764408  407971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:13:56.801085  407971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:13:56.810470  407971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:13:56.819611  407971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:13:56.823843  407971 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:13:56.823909  407971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:13:56.860547  407971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:13:56.869893  407971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:13:56.879122  407971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:13:56.883528  407971 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:13:56.883597  407971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:13:56.923513  407971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
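The test/ln pairs above implement OpenSSL's hashed-directory lookup: each CA gets a <subject-hash>.0 symlink under /etc/ssl/certs. The step in isolation:

    # Link one CA into the OpenSSL hash layout, as the log's ln -fs lines do.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"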
	I1017 20:13:56.932768  407971 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:13:56.936825  407971 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:13:56.936877  407971 kubeadm.go:400] StartCluster: {Name:auto-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:13:56.936944  407971 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:13:56.937002  407971 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:13:56.966072  407971 cri.go:89] found id: ""
	I1017 20:13:56.966149  407971 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:13:56.974956  407971 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:13:56.983458  407971 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:13:56.983529  407971 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:13:56.992091  407971 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:13:56.992112  407971 kubeadm.go:157] found existing configuration files:
	
	I1017 20:13:56.992160  407971 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:13:57.000449  407971 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:13:57.000505  407971 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:13:57.008332  407971 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:13:57.016371  407971 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:13:57.016443  407971 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:13:57.024176  407971 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:13:57.032596  407971 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:13:57.032652  407971 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:13:57.040546  407971 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:13:57.049336  407971 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:13:57.049391  407971 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:13:57.057949  407971 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:13:57.119193  407971 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 20:13:57.179457  407971 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
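Both preflight warnings are typically benign under the docker driver: the kernel-config check cannot load the configs module inside a container, and the second warning names its own remedy:

    # Clear the Service-Kubelet preflight warning by enabling the unit on the node.
    sudo systemctl enable kubelet.service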
	W1017 20:13:55.560405  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:13:58.059910  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:14:00.060464  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:14:02.060902  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	W1017 20:14:04.561954  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 17 20:13:27 embed-certs-051488 crio[561]: time="2025-10-17T20:13:27.793934944Z" level=info msg="Created container 591c0cf97c3dfa030e2cbd5dd65036ac54db823bcde7ded3a5dbdeedd3743984: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xkxdm/kubernetes-dashboard" id=fa937abc-fd7c-4d1c-9202-651140ed49d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:27 embed-certs-051488 crio[561]: time="2025-10-17T20:13:27.794664554Z" level=info msg="Starting container: 591c0cf97c3dfa030e2cbd5dd65036ac54db823bcde7ded3a5dbdeedd3743984" id=534bab4c-5da1-4994-b249-0ba3de510c8d name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:13:27 embed-certs-051488 crio[561]: time="2025-10-17T20:13:27.797206335Z" level=info msg="Started container" PID=1718 containerID=591c0cf97c3dfa030e2cbd5dd65036ac54db823bcde7ded3a5dbdeedd3743984 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xkxdm/kubernetes-dashboard id=534bab4c-5da1-4994-b249-0ba3de510c8d name=/runtime.v1.RuntimeService/StartContainer sandboxID=1850b84f3f5bcb6a307cc4b1b246f4372d2be697e1d14528a26c10eeffc35eaa
	Oct 17 20:13:40 embed-certs-051488 crio[561]: time="2025-10-17T20:13:40.986922105Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b383fe71-f6dc-41de-8c6d-de3fd04c4319 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:40 embed-certs-051488 crio[561]: time="2025-10-17T20:13:40.990529327Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bb918fe6-e312-4641-8d6a-ae259bd54dd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:40 embed-certs-051488 crio[561]: time="2025-10-17T20:13:40.992363522Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz/dashboard-metrics-scraper" id=ef89cf3b-b89d-45ec-b9ed-bad85d525240 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:40 embed-certs-051488 crio[561]: time="2025-10-17T20:13:40.992691011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.001556393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.002165267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.039274391Z" level=info msg="Created container 878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz/dashboard-metrics-scraper" id=ef89cf3b-b89d-45ec-b9ed-bad85d525240 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.040010525Z" level=info msg="Starting container: 878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef" id=e3548ff1-b21f-4c01-baf3-50f21e15da8f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.04217336Z" level=info msg="Started container" PID=1737 containerID=878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz/dashboard-metrics-scraper id=e3548ff1-b21f-4c01-baf3-50f21e15da8f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3ab8800de10934797640e100926166d4115150e4affc51c8924844d96b7afdd7
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.111656792Z" level=info msg="Removing container: caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c" id=16e93b95-2c03-48b6-896a-f23ff9e72126 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:13:41 embed-certs-051488 crio[561]: time="2025-10-17T20:13:41.12259103Z" level=info msg="Removed container caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz/dashboard-metrics-scraper" id=16e93b95-2c03-48b6-896a-f23ff9e72126 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.125070269Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=925324f7-1d33-48a6-972f-34578ab13432 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.126270639Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3e1e45ce-6ef7-4217-b706-a84b8df01c91 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.127415381Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=747dacb1-1b9f-43a4-a2d7-aa67f5c54f15 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.12775249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.133450103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.133696041Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/10f35b31be82611e8d7e4a6e9da9f27fcca80c42598495ce40e31314bfe3b7e7/merged/etc/passwd: no such file or directory"
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.133755309Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/10f35b31be82611e8d7e4a6e9da9f27fcca80c42598495ce40e31314bfe3b7e7/merged/etc/group: no such file or directory"
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.134078278Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.169029064Z" level=info msg="Created container 3c57fa6d89c0b59a810362081ee84b1bd7cda2168f28b703f844483a10a796ab: kube-system/storage-provisioner/storage-provisioner" id=747dacb1-1b9f-43a4-a2d7-aa67f5c54f15 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.17011674Z" level=info msg="Starting container: 3c57fa6d89c0b59a810362081ee84b1bd7cda2168f28b703f844483a10a796ab" id=00f14a0d-edde-4ef1-9d2b-3a96ae396026 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:13:45 embed-certs-051488 crio[561]: time="2025-10-17T20:13:45.172681573Z" level=info msg="Started container" PID=1751 containerID=3c57fa6d89c0b59a810362081ee84b1bd7cda2168f28b703f844483a10a796ab description=kube-system/storage-provisioner/storage-provisioner id=00f14a0d-edde-4ef1-9d2b-3a96ae396026 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c77b9a75149eda4c0a082043ffc497dc7101e25ad08d910c2a139f81c324b1c0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	3c57fa6d89c0b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   c77b9a75149ed       storage-provisioner                          kube-system
	878fe33e8cde0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   3ab8800de1093       dashboard-metrics-scraper-6ffb444bf9-qpfxz   kubernetes-dashboard
	591c0cf97c3df       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   1850b84f3f5bc       kubernetes-dashboard-855c9754f9-xkxdm        kubernetes-dashboard
	3de88292c408f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   165d2b5f3ac23       busybox                                      default
	4fc977badd37b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   cc431ac6c9d2f       coredns-66bc5c9577-gq5dd                     kube-system
	3eea8fc63f745       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   c77b9a75149ed       storage-provisioner                          kube-system
	d9de89a3b6ad8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   6be124a550353       kindnet-rzd8h                                kube-system
	49e7ffb1962fa       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   8e778eef29342       kube-proxy-95wmw                             kube-system
	4ae72c1607614       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   b22f2187033cd       kube-controller-manager-embed-certs-051488   kube-system
	97ca4527b2004       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   3177a97e031d3       kube-scheduler-embed-certs-051488            kube-system
	c5ba1fcfcc5d7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   f44607053d2df       etcd-embed-certs-051488                      kube-system
	9544f431ca492       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   7eed7b8478c96       kube-apiserver-embed-certs-051488            kube-system
	
	
	==> coredns [4fc977badd37b631fffe234d4d78fa83b65352d8cb445378af3c8a93dc85bef5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55481 - 38988 "HINFO IN 5188716728688465565.2489256334780365536. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074540885s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
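
The dial timeouts above show coredns unable to reach the apiserver service VIP (10.96.0.1:443) while the node network settled. A quick way to reproduce that check from inside the cluster is a throwaway pod; the pod name and image tag here are illustrative, and the probe assumes the busybox build ships TLS support:

    # Probe the apiserver VIP from the pod network; /version is readable
    # without credentials under the default RBAC rules.
    kubectl --context embed-certs-051488 run netcheck --rm -it --restart=Never \
      --image=busybox:1.36 -- wget -qO- --no-check-certificate https://10.96.0.1:443/version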
	
	
	==> describe nodes <==
	Name:               embed-certs-051488
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-051488
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=embed-certs-051488
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_12_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:12:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-051488
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:13:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:13:54 +0000   Fri, 17 Oct 2025 20:12:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:13:54 +0000   Fri, 17 Oct 2025 20:12:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:13:54 +0000   Fri, 17 Oct 2025 20:12:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:13:54 +0000   Fri, 17 Oct 2025 20:12:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-051488
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                9303a0d5-fdd2-44db-b000-32ff1975a9e6
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-gq5dd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-051488                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-rzd8h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-051488             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-051488    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-95wmw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-051488             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qpfxz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xkxdm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node embed-certs-051488 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node embed-certs-051488 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 117s)  kubelet          Node embed-certs-051488 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node embed-certs-051488 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node embed-certs-051488 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node embed-certs-051488 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node embed-certs-051488 event: Registered Node embed-certs-051488 in Controller
	  Normal  NodeReady                94s                  kubelet          Node embed-certs-051488 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)    kubelet          Node embed-certs-051488 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)    kubelet          Node embed-certs-051488 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)    kubelet          Node embed-certs-051488 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node embed-certs-051488 event: Registered Node embed-certs-051488 in Controller
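
The three "Starting kubelet" entries (117s, 112s and 55s ago) line up with the start/stop/start sequence this StartStop test drives before pausing, so the restarts themselves are expected. When correlating timings like these, a flat, sorted event listing is often easier to scan than the per-node view; this is plain kubectl, nothing minikube-specific:

    # All events across namespaces, oldest first.
    kubectl --context embed-certs-051488 get events -A --sort-by=.lastTimestamp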
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	
	
	==> etcd [c5ba1fcfcc5d70d455f9fdd910e6a22b090cf04195eb355cc7bed4064b708ae3] <==
	{"level":"warn","ts":"2025-10-17T20:13:12.994532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.005166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.013886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.031004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.039684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.047325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.056662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.066325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.074926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.084132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.092587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.100652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.107479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.121265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.128874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.139068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:13.204444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35932","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T20:13:19.089918Z","caller":"traceutil/trace.go:172","msg":"trace[527764849] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"197.482001ms","start":"2025-10-17T20:13:18.892409Z","end":"2025-10-17T20:13:19.089891Z","steps":["trace[527764849] 'process raft request'  (duration: 146.625026ms)","trace[527764849] 'compare'  (duration: 50.692477ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:19.225425Z","caller":"traceutil/trace.go:172","msg":"trace[537623989] transaction","detail":"{read_only:false; response_revision:550; number_of_response:1; }","duration":"132.050012ms","start":"2025-10-17T20:13:19.093354Z","end":"2025-10-17T20:13:19.225404Z","steps":["trace[537623989] 'process raft request'  (duration: 128.522543ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:13:19.357784Z","caller":"traceutil/trace.go:172","msg":"trace[2002606308] transaction","detail":"{read_only:false; response_revision:551; number_of_response:1; }","duration":"129.307561ms","start":"2025-10-17T20:13:19.228461Z","end":"2025-10-17T20:13:19.357768Z","steps":["trace[2002606308] 'process raft request'  (duration: 125.559085ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:13:19.483563Z","caller":"traceutil/trace.go:172","msg":"trace[1209562223] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"117.009766ms","start":"2025-10-17T20:13:19.366528Z","end":"2025-10-17T20:13:19.483537Z","steps":["trace[1209562223] 'process raft request'  (duration: 96.490431ms)","trace[1209562223] 'compare'  (duration: 20.398271ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:19.686031Z","caller":"traceutil/trace.go:172","msg":"trace[2033147254] transaction","detail":"{read_only:false; response_revision:555; number_of_response:1; }","duration":"141.656213ms","start":"2025-10-17T20:13:19.544346Z","end":"2025-10-17T20:13:19.686002Z","steps":["trace[2033147254] 'process raft request'  (duration: 122.065407ms)","trace[2033147254] 'compare'  (duration: 19.49393ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:19.852499Z","caller":"traceutil/trace.go:172","msg":"trace[1282501817] transaction","detail":"{read_only:false; response_revision:557; number_of_response:1; }","duration":"142.810454ms","start":"2025-10-17T20:13:19.709669Z","end":"2025-10-17T20:13:19.852479Z","steps":["trace[1282501817] 'process raft request'  (duration: 123.446355ms)","trace[1282501817] 'compare'  (duration: 19.262782ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:20.169369Z","caller":"traceutil/trace.go:172","msg":"trace[1840583260] transaction","detail":"{read_only:false; response_revision:562; number_of_response:1; }","duration":"234.248745ms","start":"2025-10-17T20:13:19.935075Z","end":"2025-10-17T20:13:20.169323Z","steps":["trace[1840583260] 'process raft request'  (duration: 141.616611ms)","trace[1840583260] 'compare'  (duration: 92.436187ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:46.726821Z","caller":"traceutil/trace.go:172","msg":"trace[258077766] transaction","detail":"{read_only:false; response_revision:673; number_of_response:1; }","duration":"110.869235ms","start":"2025-10-17T20:13:46.615914Z","end":"2025-10-17T20:13:46.726783Z","steps":["trace[258077766] 'process raft request'  (duration: 51.346041ms)","trace[258077766] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz; req_size:4791; } (duration: 59.310887ms)"],"step_count":2}
	
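Each trace entry above flags a transaction that took longer than 100ms; together with the load average of ~5 in the kernel section below, that points at a busy CI host rather than an etcd fault. Counting the slow traces is a cheap first check (the pod name comes from the container listing at the top of this dump):

    # How many slow-request traces etcd has logged so far.
    kubectl --context embed-certs-051488 -n kube-system logs etcd-embed-certs-051488 \
      | grep -c 'traceutil/trace.go'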
	
	==> kernel <==
	 20:14:05 up  1:56,  0 user,  load average: 5.17, 4.80, 3.03
	Linux embed-certs-051488 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d9de89a3b6ad82a5ddbbb684792758c6451c6e1c975da3a18a2b3b8a791cdc89] <==
	I1017 20:13:14.689516       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:13:14.689827       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1017 20:13:14.690006       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:13:14.690032       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:13:14.690059       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:13:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:13:14.897732       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:13:14.897853       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:13:14.897868       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:13:15.050224       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:13:15.398914       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:13:15.398956       1 metrics.go:72] Registering metrics
	I1017 20:13:15.399031       1 controller.go:711] "Syncing nftables rules"
	I1017 20:13:24.897876       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:13:24.897968       1 main.go:301] handling current node
	I1017 20:13:34.902831       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:13:34.902876       1 main.go:301] handling current node
	I1017 20:13:44.897843       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:13:44.897880       1 main.go:301] handling current node
	I1017 20:13:54.897461       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:13:54.897489       1 main.go:301] handling current node
	I1017 20:14:04.905888       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1017 20:14:04.905935       1 main.go:301] handling current node
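
The "nri plugin exited" line is kindnet probing the optional NRI socket; the error just means /var/run/nri/nri.sock is absent, which suggests NRI is not enabled in this crio configuration, and kindnet carries on without it. Verifying is a one-liner over minikube's ssh wrapper:

    # Expect "No such file or directory" unless NRI is enabled in crio.
    minikube -p embed-certs-051488 ssh -- ls /var/run/nri/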
	
	
	==> kube-apiserver [9544f431ca492c974165fabe4c6d006e40ae3fcecf8c5b140a370ddfe7fc6447] <==
	I1017 20:13:13.814367       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:13:13.815390       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:13:13.814662       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:13:13.818799       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 20:13:13.821897       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 20:13:13.821947       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:13:13.822130       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:13:13.822140       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:13:13.822148       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:13:13.822153       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:13:13.829345       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 20:13:13.829381       1 policy_source.go:240] refreshing policies
	I1017 20:13:13.829912       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:13:13.856335       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:13:14.094514       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:13:14.176406       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:13:14.220115       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:13:14.246843       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:13:14.257249       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:13:14.356349       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.99.139"}
	I1017 20:13:14.412296       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.9.53"}
	I1017 20:13:14.715909       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:13:17.189639       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:13:17.639321       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:13:17.689484       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4ae72c1607614926b75d0ad07975052274e878ae11cbacdc162e4c68994d3524] <==
	I1017 20:13:17.135771       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:13:17.135908       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:13:17.136031       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:13:17.136027       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-051488"
	I1017 20:13:17.136092       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:13:17.136313       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:13:17.136321       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:13:17.136332       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:13:17.137440       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:13:17.138675       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:13:17.141871       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:13:17.141902       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 20:13:17.141961       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:13:17.142019       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:13:17.142029       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:13:17.142037       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:13:17.143301       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:13:17.143312       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:13:17.144471       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:13:17.147816       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:13:17.154126       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:13:17.158430       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:13:17.160766       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:13:17.164031       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:13:17.164051       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [49e7ffb1962fab3caba55242c34213a2dad909b04dfe3f3a834dde0b028a70b6] <==
	I1017 20:13:14.483446       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:13:14.555018       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:13:14.656254       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:13:14.656299       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1017 20:13:14.656406       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:13:14.678933       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:13:14.679012       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:13:14.685793       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:13:14.686246       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:13:14.686275       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:13:14.687399       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:13:14.687418       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:13:14.687448       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:13:14.687457       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:13:14.687510       1 config.go:200] "Starting service config controller"
	I1017 20:13:14.688056       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:13:14.687604       1 config.go:309] "Starting node config controller"
	I1017 20:13:14.688117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:13:14.688123       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:13:14.788233       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:13:14.788274       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 20:13:14.788254       1 shared_informer.go:356] "Caches are synced" controller="service config"
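
The only non-info line in this block is the nodePortAddresses warning, and it is advisory: with the field unset, NodePort connections are accepted on every local IP, exactly as the message says. The --nodeport-addresses flag it names corresponds to a field in the kube-proxy ConfigMap; inspecting the rendered config confirms the default (the config.conf data key is the standard kubeadm layout):

    # Show the rendered kube-proxy config and locate the field in question.
    kubectl --context embed-certs-051488 -n kube-system get configmap kube-proxy \
      -o jsonpath='{.data.config\.conf}' | grep -n nodePortAddresses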
	
	
	==> kube-scheduler [97ca4527b2004f03f6c41282bd4a923be38affabd40d6736b36d1e0fe5072144] <==
	I1017 20:13:13.052844       1 serving.go:386] Generated self-signed cert in-memory
	W1017 20:13:13.770584       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 20:13:13.770622       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:13:13.770635       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 20:13:13.770646       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 20:13:13.810116       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:13:13.810160       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:13:13.813973       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:13.814076       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:13.815258       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:13:13.815395       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:13:13.914316       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:13:17 embed-certs-051488 kubelet[712]: E1017 20:13:17.519204     712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52db9f43-c27f-4ced-bad4-085de15d48d2-kube-api-access-9596z podName:52db9f43-c27f-4ced-bad4-085de15d48d2 nodeName:}" failed. No retries permitted until 2025-10-17 20:13:18.01916464 +0000 UTC m=+7.139076887 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9596z" (UniqueName: "kubernetes.io/projected/52db9f43-c27f-4ced-bad4-085de15d48d2-kube-api-access-9596z") pod "kubernetes-dashboard-855c9754f9-xkxdm" (UID: "52db9f43-c27f-4ced-bad4-085de15d48d2") : configmap "kube-root-ca.crt" not found
	Oct 17 20:13:17 embed-certs-051488 kubelet[712]: E1017 20:13:17.519260     712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9343ca71-9f7f-4503-b2b8-cbce5e2021f1-kube-api-access-2jkjj podName:9343ca71-9f7f-4503-b2b8-cbce5e2021f1 nodeName:}" failed. No retries permitted until 2025-10-17 20:13:18.019240717 +0000 UTC m=+7.139152960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2jkjj" (UniqueName: "kubernetes.io/projected/9343ca71-9f7f-4503-b2b8-cbce5e2021f1-kube-api-access-2jkjj") pod "dashboard-metrics-scraper-6ffb444bf9-qpfxz" (UID: "9343ca71-9f7f-4503-b2b8-cbce5e2021f1") : configmap "kube-root-ca.crt" not found
	Oct 17 20:13:23 embed-certs-051488 kubelet[712]: I1017 20:13:23.059993     712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podStartSLOduration=1.492372659 podStartE2EDuration="6.059967112s" podCreationTimestamp="2025-10-17 20:13:17 +0000 UTC" firstStartedPulling="2025-10-17 20:13:18.40757879 +0000 UTC m=+7.527491037" lastFinishedPulling="2025-10-17 20:13:22.975173206 +0000 UTC m=+12.095085490" observedRunningTime="2025-10-17 20:13:23.059523272 +0000 UTC m=+12.179435520" watchObservedRunningTime="2025-10-17 20:13:23.059967112 +0000 UTC m=+12.179879359"
	Oct 17 20:13:24 embed-certs-051488 kubelet[712]: I1017 20:13:24.050525     712 scope.go:117] "RemoveContainer" containerID="5ef7f1711c9827b71b4ef77ce47e981582ecd7c08ffa3349e76f1bf759be745c"
	Oct 17 20:13:25 embed-certs-051488 kubelet[712]: I1017 20:13:25.056248     712 scope.go:117] "RemoveContainer" containerID="5ef7f1711c9827b71b4ef77ce47e981582ecd7c08ffa3349e76f1bf759be745c"
	Oct 17 20:13:25 embed-certs-051488 kubelet[712]: I1017 20:13:25.056443     712 scope.go:117] "RemoveContainer" containerID="caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c"
	Oct 17 20:13:25 embed-certs-051488 kubelet[712]: E1017 20:13:25.056649     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qpfxz_kubernetes-dashboard(9343ca71-9f7f-4503-b2b8-cbce5e2021f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podUID="9343ca71-9f7f-4503-b2b8-cbce5e2021f1"
	Oct 17 20:13:26 embed-certs-051488 kubelet[712]: I1017 20:13:26.061557     712 scope.go:117] "RemoveContainer" containerID="caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c"
	Oct 17 20:13:26 embed-certs-051488 kubelet[712]: E1017 20:13:26.061735     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qpfxz_kubernetes-dashboard(9343ca71-9f7f-4503-b2b8-cbce5e2021f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podUID="9343ca71-9f7f-4503-b2b8-cbce5e2021f1"
	Oct 17 20:13:27 embed-certs-051488 kubelet[712]: I1017 20:13:27.065030     712 scope.go:117] "RemoveContainer" containerID="caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c"
	Oct 17 20:13:27 embed-certs-051488 kubelet[712]: E1017 20:13:27.065291     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qpfxz_kubernetes-dashboard(9343ca71-9f7f-4503-b2b8-cbce5e2021f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podUID="9343ca71-9f7f-4503-b2b8-cbce5e2021f1"
	Oct 17 20:13:28 embed-certs-051488 kubelet[712]: I1017 20:13:28.081468     712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xkxdm" podStartSLOduration=1.762870025 podStartE2EDuration="11.081447337s" podCreationTimestamp="2025-10-17 20:13:17 +0000 UTC" firstStartedPulling="2025-10-17 20:13:18.427958911 +0000 UTC m=+7.547871136" lastFinishedPulling="2025-10-17 20:13:27.746536222 +0000 UTC m=+16.866448448" observedRunningTime="2025-10-17 20:13:28.081121425 +0000 UTC m=+17.201033672" watchObservedRunningTime="2025-10-17 20:13:28.081447337 +0000 UTC m=+17.201359583"
	Oct 17 20:13:40 embed-certs-051488 kubelet[712]: I1017 20:13:40.986231     712 scope.go:117] "RemoveContainer" containerID="caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c"
	Oct 17 20:13:41 embed-certs-051488 kubelet[712]: I1017 20:13:41.110385     712 scope.go:117] "RemoveContainer" containerID="caccc474d7e1d7b1baa69f49f9027b8adf44d056d9021a7f78a55938749ee21c"
	Oct 17 20:13:41 embed-certs-051488 kubelet[712]: I1017 20:13:41.110661     712 scope.go:117] "RemoveContainer" containerID="878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef"
	Oct 17 20:13:41 embed-certs-051488 kubelet[712]: E1017 20:13:41.110897     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qpfxz_kubernetes-dashboard(9343ca71-9f7f-4503-b2b8-cbce5e2021f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podUID="9343ca71-9f7f-4503-b2b8-cbce5e2021f1"
	Oct 17 20:13:45 embed-certs-051488 kubelet[712]: I1017 20:13:45.124560     712 scope.go:117] "RemoveContainer" containerID="3eea8fc63f7454fa42560a9280bcad28b308b8a750fd423c60efbc5605f8ac6e"
	Oct 17 20:13:46 embed-certs-051488 kubelet[712]: I1017 20:13:46.609181     712 scope.go:117] "RemoveContainer" containerID="878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef"
	Oct 17 20:13:46 embed-certs-051488 kubelet[712]: E1017 20:13:46.609400     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qpfxz_kubernetes-dashboard(9343ca71-9f7f-4503-b2b8-cbce5e2021f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podUID="9343ca71-9f7f-4503-b2b8-cbce5e2021f1"
	Oct 17 20:14:00 embed-certs-051488 kubelet[712]: I1017 20:14:00.986678     712 scope.go:117] "RemoveContainer" containerID="878fe33e8cde050d74f263b521b39376c65689ea8801756f6d31d461612c19ef"
	Oct 17 20:14:00 embed-certs-051488 kubelet[712]: E1017 20:14:00.986910     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qpfxz_kubernetes-dashboard(9343ca71-9f7f-4503-b2b8-cbce5e2021f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qpfxz" podUID="9343ca71-9f7f-4503-b2b8-cbce5e2021f1"
	Oct 17 20:14:01 embed-certs-051488 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:14:01 embed-certs-051488 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:14:01 embed-certs-051488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 20:14:01 embed-certs-051488 systemd[1]: kubelet.service: Consumed 1.844s CPU time.
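
The kubelet entries trace the standard CrashLoopBackOff progression for dashboard-metrics-scraper: back-off 10s after the early failures, 20s once it has exited again, doubling toward the kubelet's 5m cap. The actual exit reason lives in the previous container instance rather than the current one:

    # Logs from the last crashed attempt of the scraper pod.
    kubectl --context embed-certs-051488 -n kubernetes-dashboard logs \
      dashboard-metrics-scraper-6ffb444bf9-qpfxz --previous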
	
	
	==> kubernetes-dashboard [591c0cf97c3dfa030e2cbd5dd65036ac54db823bcde7ded3a5dbdeedd3743984] <==
	2025/10/17 20:13:27 Starting overwatch
	2025/10/17 20:13:27 Using namespace: kubernetes-dashboard
	2025/10/17 20:13:27 Using in-cluster config to connect to apiserver
	2025/10/17 20:13:27 Using secret token for csrf signing
	2025/10/17 20:13:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:13:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:13:27 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 20:13:27 Generating JWE encryption key
	2025/10/17 20:13:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:13:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:13:28 Initializing JWE encryption key from synchronized object
	2025/10/17 20:13:28 Creating in-cluster Sidecar client
	2025/10/17 20:13:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:13:28 Serving insecurely on HTTP port: 9090
	2025/10/17 20:13:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
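
The dashboard itself serves fine; only its optional metrics integration fails its health check, because it queries the dashboard-metrics-scraper service whose backing pod is the one crash-looping above. The service side can be confirmed independently of the pod:

    # The Service exists; its endpoint slices show no ready endpoints
    # while the scraper keeps crashing.
    kubectl --context embed-certs-051488 -n kubernetes-dashboard get svc,endpointslices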
	
	
	==> storage-provisioner [3c57fa6d89c0b59a810362081ee84b1bd7cda2168f28b703f844483a10a796ab] <==
	I1017 20:13:45.187906       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:13:45.198432       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:13:45.198554       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:13:45.201965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:48.657937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:52.918631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:56.517380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:13:59.570872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:02.594149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:02.599467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:14:02.599629       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:14:02.599695       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c795daa1-3cc4-4dc8-b9fb-3eec5780324d", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-051488_9da4204f-6530-443d-b8f9-b63cc80b35e6 became leader
	I1017 20:14:02.599806       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-051488_9da4204f-6530-443d-b8f9-b63cc80b35e6!
	W1017 20:14:02.602303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:02.606298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:14:02.701024       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-051488_9da4204f-6530-443d-b8f9-b63cc80b35e6!
	W1017 20:14:04.609465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:04.613991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
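
Every warning in this block is the same message repeated: the provisioner's leader election takes its lock on a v1 Endpoints object (k8s.io-minikube-hostpath, visible in the LeaderElection event above), an API the warning flags as deprecated since v1.33. The lock object itself records the current holder:

    # The Endpoints-based election lock the provisioner acquires.
    kubectl --context embed-certs-051488 -n kube-system get endpoints \
      k8s.io-minikube-hostpath -o yaml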
	
	
	==> storage-provisioner [3eea8fc63f7454fa42560a9280bcad28b308b8a750fd423c60efbc5605f8ac6e] <==
	I1017 20:13:14.439672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:13:44.445163       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-051488 -n embed-certs-051488
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-051488 -n embed-certs-051488: exit status 2 (361.262309ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-051488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.95s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-563805 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-563805 --alsologtostderr -v=1: exit status 80 (2.1472321s)

-- stdout --
	* Pausing node default-k8s-diff-port-563805 ... 
	
	

-- /stdout --
** stderr ** 
	I1017 20:14:24.789996  415875 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:14:24.790156  415875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:14:24.790166  415875 out.go:374] Setting ErrFile to fd 2...
	I1017 20:14:24.790173  415875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:14:24.790481  415875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:14:24.790834  415875 out.go:368] Setting JSON to false
	I1017 20:14:24.790903  415875 mustload.go:65] Loading cluster: default-k8s-diff-port-563805
	I1017 20:14:24.791429  415875 config.go:182] Loaded profile config "default-k8s-diff-port-563805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:14:24.792039  415875 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-563805 --format={{.State.Status}}
	I1017 20:14:24.812602  415875 host.go:66] Checking if "default-k8s-diff-port-563805" exists ...
	I1017 20:14:24.813011  415875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:14:24.884043  415875 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-17 20:14:24.871889936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:14:24.884888  415875 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-563805 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:14:24.887198  415875 out.go:179] * Pausing node default-k8s-diff-port-563805 ... 
	I1017 20:14:24.888768  415875 host.go:66] Checking if "default-k8s-diff-port-563805" exists ...
	I1017 20:14:24.889088  415875 ssh_runner.go:195] Run: systemctl --version
	I1017 20:14:24.889147  415875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-563805
	I1017 20:14:24.910939  415875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/default-k8s-diff-port-563805/id_rsa Username:docker}
	I1017 20:14:25.013708  415875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:14:25.030051  415875 pause.go:52] kubelet running: true
	I1017 20:14:25.030138  415875 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:14:25.244915  415875 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:14:25.245042  415875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:14:25.333382  415875 cri.go:89] found id: "a7cd25c03695ca30218da14c7e94f11aaa2d7d8a98ccd3f06cff2c1dad0922bd"
	I1017 20:14:25.333409  415875 cri.go:89] found id: "620478bbf7c357ce43fdb113d1af8b156c3f06537ebbde3f375835b749f63165"
	I1017 20:14:25.333414  415875 cri.go:89] found id: "c4a95fedc4957f2772d4188de75f2d0b0715d0ead81d66093c1bb82a882026d5"
	I1017 20:14:25.333417  415875 cri.go:89] found id: "befec0b605a11944db3aa5e1626c300e786a26bec9be6f5bef7d94439e2b74cd"
	I1017 20:14:25.333420  415875 cri.go:89] found id: "f6fedb384a1ad00b57204bbb8a84f0877c763ba980fe5fe9bdd6d9fd495b8981"
	I1017 20:14:25.333423  415875 cri.go:89] found id: "c595776216f076fd092a3194172be36c923143b82bc0c107305659b192166d72"
	I1017 20:14:25.333426  415875 cri.go:89] found id: "8b04285c222479d3b2ea10ca1123a4893d4e6350366905f40c907646a9f3259c"
	I1017 20:14:25.333429  415875 cri.go:89] found id: "3921f3f5375050e83141087f7f8ca522220b109c30ad4b4d1d6c09216bc51b9b"
	I1017 20:14:25.333431  415875 cri.go:89] found id: "304a87295c1b69a58634803b264b8f89d380003a2081fe68a13fad1c6406af7c"
	I1017 20:14:25.333438  415875 cri.go:89] found id: "d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2"
	I1017 20:14:25.333442  415875 cri.go:89] found id: "dec17f1d9027dfa31aeaa2dc6ea73f5f3ea06821f779ca9a7b446e04d0051274"
	I1017 20:14:25.333446  415875 cri.go:89] found id: ""
	I1017 20:14:25.333498  415875 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:14:25.348541  415875 retry.go:31] will retry after 147.511831ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:14:25Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:14:25.496980  415875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:14:25.513677  415875 pause.go:52] kubelet running: false
	I1017 20:14:25.513776  415875 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:14:25.677215  415875 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:14:25.677293  415875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:14:25.755847  415875 cri.go:89] found id: "a7cd25c03695ca30218da14c7e94f11aaa2d7d8a98ccd3f06cff2c1dad0922bd"
	I1017 20:14:25.755876  415875 cri.go:89] found id: "620478bbf7c357ce43fdb113d1af8b156c3f06537ebbde3f375835b749f63165"
	I1017 20:14:25.755880  415875 cri.go:89] found id: "c4a95fedc4957f2772d4188de75f2d0b0715d0ead81d66093c1bb82a882026d5"
	I1017 20:14:25.755883  415875 cri.go:89] found id: "befec0b605a11944db3aa5e1626c300e786a26bec9be6f5bef7d94439e2b74cd"
	I1017 20:14:25.755886  415875 cri.go:89] found id: "f6fedb384a1ad00b57204bbb8a84f0877c763ba980fe5fe9bdd6d9fd495b8981"
	I1017 20:14:25.755889  415875 cri.go:89] found id: "c595776216f076fd092a3194172be36c923143b82bc0c107305659b192166d72"
	I1017 20:14:25.755891  415875 cri.go:89] found id: "8b04285c222479d3b2ea10ca1123a4893d4e6350366905f40c907646a9f3259c"
	I1017 20:14:25.755894  415875 cri.go:89] found id: "3921f3f5375050e83141087f7f8ca522220b109c30ad4b4d1d6c09216bc51b9b"
	I1017 20:14:25.755896  415875 cri.go:89] found id: "304a87295c1b69a58634803b264b8f89d380003a2081fe68a13fad1c6406af7c"
	I1017 20:14:25.755914  415875 cri.go:89] found id: "d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2"
	I1017 20:14:25.755918  415875 cri.go:89] found id: "dec17f1d9027dfa31aeaa2dc6ea73f5f3ea06821f779ca9a7b446e04d0051274"
	I1017 20:14:25.755922  415875 cri.go:89] found id: ""
	I1017 20:14:25.755976  415875 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:14:25.769264  415875 retry.go:31] will retry after 226.046392ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:14:25Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:14:25.995767  415875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:14:26.009749  415875 pause.go:52] kubelet running: false
	I1017 20:14:26.009820  415875 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:14:26.187850  415875 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:14:26.188040  415875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:14:26.260761  415875 cri.go:89] found id: "a7cd25c03695ca30218da14c7e94f11aaa2d7d8a98ccd3f06cff2c1dad0922bd"
	I1017 20:14:26.260787  415875 cri.go:89] found id: "620478bbf7c357ce43fdb113d1af8b156c3f06537ebbde3f375835b749f63165"
	I1017 20:14:26.260793  415875 cri.go:89] found id: "c4a95fedc4957f2772d4188de75f2d0b0715d0ead81d66093c1bb82a882026d5"
	I1017 20:14:26.260797  415875 cri.go:89] found id: "befec0b605a11944db3aa5e1626c300e786a26bec9be6f5bef7d94439e2b74cd"
	I1017 20:14:26.260801  415875 cri.go:89] found id: "f6fedb384a1ad00b57204bbb8a84f0877c763ba980fe5fe9bdd6d9fd495b8981"
	I1017 20:14:26.260806  415875 cri.go:89] found id: "c595776216f076fd092a3194172be36c923143b82bc0c107305659b192166d72"
	I1017 20:14:26.260810  415875 cri.go:89] found id: "8b04285c222479d3b2ea10ca1123a4893d4e6350366905f40c907646a9f3259c"
	I1017 20:14:26.260814  415875 cri.go:89] found id: "3921f3f5375050e83141087f7f8ca522220b109c30ad4b4d1d6c09216bc51b9b"
	I1017 20:14:26.260827  415875 cri.go:89] found id: "304a87295c1b69a58634803b264b8f89d380003a2081fe68a13fad1c6406af7c"
	I1017 20:14:26.260836  415875 cri.go:89] found id: "d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2"
	I1017 20:14:26.260842  415875 cri.go:89] found id: "dec17f1d9027dfa31aeaa2dc6ea73f5f3ea06821f779ca9a7b446e04d0051274"
	I1017 20:14:26.260847  415875 cri.go:89] found id: ""
	I1017 20:14:26.260897  415875 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:14:26.273722  415875 retry.go:31] will retry after 330.344071ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:14:26Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:14:26.605053  415875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:14:26.619731  415875 pause.go:52] kubelet running: false
	I1017 20:14:26.619816  415875 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:14:26.772586  415875 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:14:26.772670  415875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:14:26.854501  415875 cri.go:89] found id: "a7cd25c03695ca30218da14c7e94f11aaa2d7d8a98ccd3f06cff2c1dad0922bd"
	I1017 20:14:26.854530  415875 cri.go:89] found id: "620478bbf7c357ce43fdb113d1af8b156c3f06537ebbde3f375835b749f63165"
	I1017 20:14:26.854536  415875 cri.go:89] found id: "c4a95fedc4957f2772d4188de75f2d0b0715d0ead81d66093c1bb82a882026d5"
	I1017 20:14:26.854540  415875 cri.go:89] found id: "befec0b605a11944db3aa5e1626c300e786a26bec9be6f5bef7d94439e2b74cd"
	I1017 20:14:26.854544  415875 cri.go:89] found id: "f6fedb384a1ad00b57204bbb8a84f0877c763ba980fe5fe9bdd6d9fd495b8981"
	I1017 20:14:26.854552  415875 cri.go:89] found id: "c595776216f076fd092a3194172be36c923143b82bc0c107305659b192166d72"
	I1017 20:14:26.854557  415875 cri.go:89] found id: "8b04285c222479d3b2ea10ca1123a4893d4e6350366905f40c907646a9f3259c"
	I1017 20:14:26.854561  415875 cri.go:89] found id: "3921f3f5375050e83141087f7f8ca522220b109c30ad4b4d1d6c09216bc51b9b"
	I1017 20:14:26.854565  415875 cri.go:89] found id: "304a87295c1b69a58634803b264b8f89d380003a2081fe68a13fad1c6406af7c"
	I1017 20:14:26.854573  415875 cri.go:89] found id: "d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2"
	I1017 20:14:26.854577  415875 cri.go:89] found id: "dec17f1d9027dfa31aeaa2dc6ea73f5f3ea06821f779ca9a7b446e04d0051274"
	I1017 20:14:26.854581  415875 cri.go:89] found id: ""
	I1017 20:14:26.854631  415875 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:14:26.871635  415875 out.go:203] 
	W1017 20:14:26.874717  415875 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:14:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:14:26.874756  415875 out.go:285] * 
	W1017 20:14:26.879160  415875 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:14:26.881022  415875 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-563805 --alsologtostderr -v=1 failed: exit status 80
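
Reading the trace above: kubelet is running on the first pass and gets disabled, crictl still enumerates the kube-system containers, but every `sudo runc list -f json` attempt fails because /run/runc is absent on this crio node, and after three backoff retries (~148ms, ~226ms, ~330ms) pause gives up with GUEST_PAUSE and exit status 80. A minimal Go sketch of that retry shape — all names here are illustrative and the backoff values are copied from the log above, not from minikube's source:

	// checkrunc.go - hypothetical sketch of the retry loop the pause path
	// appears to perform (see the retry.go:31 entries above); this is not
	// minikube's implementation.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func listRunningContainers() ([]byte, error) {
		// The same command the log shows failing: on this node /run/runc
		// does not exist, so runc exits 1 with "open /run/runc: no such
		// file or directory".
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}

	func main() {
		backoffs := []time.Duration{148 * time.Millisecond, 226 * time.Millisecond, 330 * time.Millisecond}
		for i, d := range backoffs {
			out, err := listRunningContainers()
			if err == nil {
				fmt.Printf("runc list succeeded: %s\n", out)
				return
			}
			fmt.Printf("attempt %d failed (%v), will retry after %v\n", i+1, err, d)
			time.Sleep(d)
		}
		// Mirrors the GUEST_PAUSE abort after the final attempt fails.
		fmt.Println("giving up: list running: runc list failed")
	}

Note that crictl does list the containers while runc cannot, which is consistent with the runtime being crio here; treat the missing /run/runc state directory as the observed symptom in this log, not a confirmed root cause.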
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-563805
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-563805:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655",
	        "Created": "2025-10-17T20:12:20.619875365Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 405209,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:13:29.825614622Z",
	            "FinishedAt": "2025-10-17T20:13:28.922611176Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655/hostname",
	        "HostsPath": "/var/lib/docker/containers/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655/hosts",
	        "LogPath": "/var/lib/docker/containers/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655-json.log",
	        "Name": "/default-k8s-diff-port-563805",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-563805:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-563805",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655",
	                "LowerDir": "/var/lib/docker/overlay2/9694efb013e5aed72249f05b0bbf90d3e017142a17528a152939e78b8d67d837-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9694efb013e5aed72249f05b0bbf90d3e017142a17528a152939e78b8d67d837/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9694efb013e5aed72249f05b0bbf90d3e017142a17528a152939e78b8d67d837/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9694efb013e5aed72249f05b0bbf90d3e017142a17528a152939e78b8d67d837/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-563805",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-563805/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-563805",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-563805",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-563805",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f7647ecdd4b34d2b072af45430f3b63364239613214d283cab0e42e8e962f9ef",
	            "SandboxKey": "/var/run/docker/netns/f7647ecdd4b3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33223"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33222"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-563805": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:e8:44:42:40:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9a4aaba57340b08a6dc80d718ca509a23c5f23e099fc7d8315ee78ac47b427de",
	                    "EndpointID": "68a0dcb8aa6452726bf36a3f75517275864e7cdc241c840b238e4b34ddde6dfa",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-563805",
	                        "7567eb504598"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
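
The Ports block above confirms the mapping the pause command relied on: 22/tcp is published on 127.0.0.1:33219, the exact endpoint the sshutil line connected to. The same Go template that cli_runner executed can be replayed directly; a small sketch, assuming docker is on PATH and the container from this report still exists:

	// sshport.go - re-derives the host port for 22/tcp with the same Go
	// template the cli_runner log line shows; the container name is taken
	// from this report.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"default-k8s-diff-port-563805").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Expected to print 33219, matching the NetworkSettings.Ports
		// section of the inspect output above.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}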
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805
I1017 20:14:27.040541  139217 config.go:182] Loaded profile config "auto-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805: exit status 2 (379.838962ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
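The bare `Running` on stdout alongside exit status 2 is expected: `--format={{.Host}}` renders only the Host field of the status, while the exit code separately signals that other components are degraded (kubelet was disabled during the pause attempt). A toy illustration of that kind of template rendering, using a hypothetical Status struct rather than minikube's real one:

	// statusfmt.go - shows how a --format={{.Host}} style flag renders a
	// single field; the Status struct here is hypothetical.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		t := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints just "Running", which is why the stdout above contains
		// only that word even though the process exits 2.
		t.Execute(os.Stdout, st)
		os.Stdout.WriteString("\n")
	}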
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-563805 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-563805 logs -n 25: (1.530248385s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-051488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                             │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                    │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ image   │ newest-cni-051083 image list --format=json                                                                                                                                                                                │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ pause   │ -p newest-cni-051083 --alsologtostderr -v=1                                                                                                                                                                               │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-563805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                        │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-563805 --alsologtostderr -v=3                                                                                                                                                                    │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p newest-cni-051083                                                                                                                                                                                                      │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p newest-cni-051083                                                                                                                                                                                                      │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p cert-options-318223 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-660693    │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-660693    │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-563805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                   │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                  │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:14 UTC │
	│ ssh     │ cert-options-318223 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ ssh     │ -p cert-options-318223 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p cert-options-318223                                                                                                                                                                                                    │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p auto-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                   │ auto-684669                  │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:14 UTC │
	│ image   │ embed-certs-051488 image list --format=json                                                                                                                                                                               │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	│ pause   │ -p embed-certs-051488 --alsologtostderr -v=1                                                                                                                                                                              │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │                     │
	│ delete  │ -p embed-certs-051488                                                                                                                                                                                                     │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	│ delete  │ -p embed-certs-051488                                                                                                                                                                                                     │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	│ start   │ -p kindnet-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                  │ kindnet-684669               │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │                     │
	│ image   │ default-k8s-diff-port-563805 image list --format=json                                                                                                                                                                     │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	│ pause   │ -p default-k8s-diff-port-563805 --alsologtostderr -v=1                                                                                                                                                                    │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │                     │
	│ ssh     │ -p auto-684669 pgrep -a kubelet                                                                                                                                                                                           │ auto-684669                  │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:14:09
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:14:09.670087  412924 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:14:09.670336  412924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:14:09.670344  412924 out.go:374] Setting ErrFile to fd 2...
	I1017 20:14:09.670348  412924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:14:09.670551  412924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:14:09.671070  412924 out.go:368] Setting JSON to false
	I1017 20:14:09.672353  412924 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6998,"bootTime":1760725052,"procs":429,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:14:09.672463  412924 start.go:141] virtualization: kvm guest
	I1017 20:14:09.674640  412924 out.go:179] * [kindnet-684669] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:14:09.676060  412924 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:14:09.676083  412924 notify.go:220] Checking for updates...
	I1017 20:14:09.678986  412924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:14:09.680596  412924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:14:09.682134  412924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:14:09.683631  412924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:14:09.685013  412924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:14:09.686956  412924 config.go:182] Loaded profile config "auto-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:14:09.687106  412924 config.go:182] Loaded profile config "default-k8s-diff-port-563805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:14:09.687235  412924 config.go:182] Loaded profile config "kubernetes-upgrade-660693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:14:09.687365  412924 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:14:09.714921  412924 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:14:09.715049  412924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:14:09.777482  412924 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:14:09.765897473 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:14:09.777650  412924 docker.go:318] overlay module found
	I1017 20:14:09.779449  412924 out.go:179] * Using the docker driver based on user configuration
	I1017 20:14:09.780904  412924 start.go:305] selected driver: docker
	I1017 20:14:09.780938  412924 start.go:925] validating driver "docker" against <nil>
	I1017 20:14:09.780956  412924 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:14:09.781614  412924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:14:09.839297  412924 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:14:09.829912051 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:14:09.839481  412924 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:14:09.839724  412924 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:14:09.841937  412924 out.go:179] * Using Docker driver with root privileges
	I1017 20:14:09.843394  412924 cni.go:84] Creating CNI manager for "kindnet"
	I1017 20:14:09.843415  412924 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:14:09.843488  412924 start.go:349] cluster config:
	{Name:kindnet-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:14:09.844928  412924 out.go:179] * Starting "kindnet-684669" primary control-plane node in "kindnet-684669" cluster
	I1017 20:14:09.846248  412924 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:14:09.847661  412924 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:14:09.849005  412924 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:14:09.849072  412924 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 20:14:09.849085  412924 cache.go:58] Caching tarball of preloaded images
	I1017 20:14:09.849129  412924 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:14:09.849177  412924 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:14:09.849187  412924 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:14:09.849300  412924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/config.json ...
	I1017 20:14:09.849327  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/config.json: {Name:mk76cc40f98ce1fd9978a490757cb3c468f44416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:09.871651  412924 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:14:09.871677  412924 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:14:09.871699  412924 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:14:09.871731  412924 start.go:360] acquireMachinesLock for kindnet-684669: {Name:mkc6ec4425f15705bbeb59a41d5555bf1ec6bce9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:14:09.871883  412924 start.go:364] duration metric: took 99.802µs to acquireMachinesLock for "kindnet-684669"
	I1017 20:14:09.871912  412924 start.go:93] Provisioning new machine with config: &{Name:kindnet-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-684669 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:14:09.872011  412924 start.go:125] createHost starting for "" (driver="docker")
	I1017 20:14:08.030325  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:08.530979  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:09.030883  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:09.530380  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:10.031005  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:10.530403  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:11.030554  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:11.530982  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:12.031017  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:12.117396  407971 kubeadm.go:1113] duration metric: took 4.667382741s to wait for elevateKubeSystemPrivileges
	I1017 20:14:12.117469  407971 kubeadm.go:402] duration metric: took 15.180595473s to StartCluster
	I1017 20:14:12.117497  407971 settings.go:142] acquiring lock: {Name:mka4633fb25e97d0a4c6d64012444d90b7517c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:12.117775  407971 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:14:12.119662  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:12.119978  407971 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:14:12.119976  407971 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:14:12.120063  407971 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:14:12.120180  407971 config.go:182] Loaded profile config "auto-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:14:12.120187  407971 addons.go:69] Setting storage-provisioner=true in profile "auto-684669"
	I1017 20:14:12.120208  407971 addons.go:238] Setting addon storage-provisioner=true in "auto-684669"
	I1017 20:14:12.120243  407971 host.go:66] Checking if "auto-684669" exists ...
	I1017 20:14:12.120232  407971 addons.go:69] Setting default-storageclass=true in profile "auto-684669"
	I1017 20:14:12.120267  407971 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-684669"
	I1017 20:14:12.120710  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Status}}
	I1017 20:14:12.120925  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Status}}
	I1017 20:14:12.122310  407971 out.go:179] * Verifying Kubernetes components...
	I1017 20:14:12.124586  407971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:14:12.145166  407971 addons.go:238] Setting addon default-storageclass=true in "auto-684669"
	I1017 20:14:12.145209  407971 host.go:66] Checking if "auto-684669" exists ...
	I1017 20:14:12.145787  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Status}}
	I1017 20:14:12.149456  407971 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:14:12.151172  407971 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:14:12.151198  407971 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:14:12.151262  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:14:12.178836  407971 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:14:12.178871  407971 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:14:12.178952  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:14:12.187942  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:14:12.204934  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:14:12.222121  407971 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:14:12.281273  407971 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:14:12.318364  407971 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:14:12.328339  407971 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:14:12.436911  407971 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1017 20:14:12.438501  407971 node_ready.go:35] waiting up to 15m0s for node "auto-684669" to be "Ready" ...
	I1017 20:14:12.692306  407971 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 20:14:12.693790  407971 addons.go:514] duration metric: took 573.72582ms for enable addons: enabled=[storage-provisioner default-storageclass]
	W1017 20:14:11.061013  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	I1017 20:14:11.560899  405011 pod_ready.go:94] pod "coredns-66bc5c9577-bsp94" is "Ready"
	I1017 20:14:11.560941  405011 pod_ready.go:86] duration metric: took 31.006838817s for pod "coredns-66bc5c9577-bsp94" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.563537  405011 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.568264  405011 pod_ready.go:94] pod "etcd-default-k8s-diff-port-563805" is "Ready"
	I1017 20:14:11.568294  405011 pod_ready.go:86] duration metric: took 4.728927ms for pod "etcd-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.570685  405011 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.575344  405011 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-563805" is "Ready"
	I1017 20:14:11.575377  405011 pod_ready.go:86] duration metric: took 4.666923ms for pod "kube-apiserver-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.578056  405011 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.759271  405011 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-563805" is "Ready"
	I1017 20:14:11.759302  405011 pod_ready.go:86] duration metric: took 181.205762ms for pod "kube-controller-manager-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.958863  405011 pod_ready.go:83] waiting for pod "kube-proxy-g7749" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:12.358260  405011 pod_ready.go:94] pod "kube-proxy-g7749" is "Ready"
	I1017 20:14:12.358292  405011 pod_ready.go:86] duration metric: took 399.400355ms for pod "kube-proxy-g7749" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:12.560082  405011 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:12.957999  405011 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-563805" is "Ready"
	I1017 20:14:12.958029  405011 pod_ready.go:86] duration metric: took 397.913437ms for pod "kube-scheduler-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:12.958040  405011 pod_ready.go:40] duration metric: took 32.408827529s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:14:13.013030  405011 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:14:13.018028  405011 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-563805" cluster and "default" namespace by default
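The pod_ready loop above polls each control-plane pod until it reports Ready or disappears. A rough hand-rolled equivalent with kubectl wait, using label selectors copied from the summary line above (the timeout is an assumption, and kubectl wait does not model the "or be gone" branch):

    kubectl --context default-k8s-diff-port-563805 -n kube-system \
      wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=10m
    kubectl --context default-k8s-diff-port-563805 -n kube-system \
      wait pod -l component=kube-scheduler --for=condition=Ready --timeout=10m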
	I1017 20:14:09.874397  412924 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:14:09.874595  412924 start.go:159] libmachine.API.Create for "kindnet-684669" (driver="docker")
	I1017 20:14:09.874625  412924 client.go:168] LocalClient.Create starting
	I1017 20:14:09.874726  412924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem
	I1017 20:14:09.874791  412924 main.go:141] libmachine: Decoding PEM data...
	I1017 20:14:09.874817  412924 main.go:141] libmachine: Parsing certificate...
	I1017 20:14:09.874868  412924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem
	I1017 20:14:09.874887  412924 main.go:141] libmachine: Decoding PEM data...
	I1017 20:14:09.874897  412924 main.go:141] libmachine: Parsing certificate...
	I1017 20:14:09.875219  412924 cli_runner.go:164] Run: docker network inspect kindnet-684669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:14:09.893105  412924 cli_runner.go:211] docker network inspect kindnet-684669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:14:09.893186  412924 network_create.go:284] running [docker network inspect kindnet-684669] to gather additional debugging logs...
	I1017 20:14:09.893213  412924 cli_runner.go:164] Run: docker network inspect kindnet-684669
	W1017 20:14:09.910546  412924 cli_runner.go:211] docker network inspect kindnet-684669 returned with exit code 1
	I1017 20:14:09.910578  412924 network_create.go:287] error running [docker network inspect kindnet-684669]: docker network inspect kindnet-684669: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-684669 not found
	I1017 20:14:09.910600  412924 network_create.go:289] output of [docker network inspect kindnet-684669]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-684669 not found
	
	** /stderr **
	I1017 20:14:09.910717  412924 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:14:09.930380  412924 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d34a70da1174 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:b8:c9:c3:2e:b0} reservation:<nil>}
	I1017 20:14:09.931100  412924 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-07edace58173 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f3:28:2c:52:ce} reservation:<nil>}
	I1017 20:14:09.931858  412924 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a478249e8fe7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:51:65:8d:cb:60} reservation:<nil>}
	I1017 20:14:09.932719  412924 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7ed8ef1bc0a4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:6a:98:d7:e8:28} reservation:<nil>}
	I1017 20:14:09.933070  412924 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9a4aaba57340 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:16:30:99:20:8d:be} reservation:<nil>}
	I1017 20:14:09.933868  412924 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00208e170}
	I1017 20:14:09.933892  412924 network_create.go:124] attempt to create docker network kindnet-684669 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1017 20:14:09.933945  412924 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-684669 kindnet-684669
	I1017 20:14:09.997406  412924 network_create.go:108] docker network kindnet-684669 192.168.94.0/24 created
	I1017 20:14:09.997443  412924 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-684669" container
	I1017 20:14:09.997521  412924 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:14:10.018449  412924 cli_runner.go:164] Run: docker volume create kindnet-684669 --label name.minikube.sigs.k8s.io=kindnet-684669 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:14:10.038413  412924 oci.go:103] Successfully created a docker volume kindnet-684669
	I1017 20:14:10.038499  412924 cli_runner.go:164] Run: docker run --rm --name kindnet-684669-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-684669 --entrypoint /usr/bin/test -v kindnet-684669:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:14:10.457884  412924 oci.go:107] Successfully prepared a docker volume kindnet-684669
	I1017 20:14:10.457935  412924 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:14:10.457961  412924 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:14:10.458044  412924 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-684669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
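network.go above walks the 192.168.x.0/24 blocks and skips every subnet already claimed by a docker bridge before settling on 192.168.94.0/24. A sketch of the same scan done by hand with the docker CLI (no minikube internals assumed):

    # Print each docker network's name and the subnets it has reserved.
    for net in $(docker network ls -q); do
      docker network inspect "$net" \
        --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
    done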
	I1017 20:14:12.942197  407971 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-684669" context rescaled to 1 replicas
	W1017 20:14:14.442350  407971 node_ready.go:57] node "auto-684669" has "Ready":"False" status (will retry)
	W1017 20:14:16.941556  407971 node_ready.go:57] node "auto-684669" has "Ready":"False" status (will retry)
	I1017 20:14:15.244667  412924 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-684669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.786563684s)
	I1017 20:14:15.244697  412924 kic.go:203] duration metric: took 4.786732459s to extract preloaded images to volume ...
	W1017 20:14:15.244815  412924 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 20:14:15.244846  412924 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 20:14:15.244879  412924 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:14:15.302251  412924 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-684669 --name kindnet-684669 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-684669 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-684669 --network kindnet-684669 --ip 192.168.94.2 --volume kindnet-684669:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:14:15.616165  412924 cli_runner.go:164] Run: docker container inspect kindnet-684669 --format={{.State.Running}}
	I1017 20:14:15.635817  412924 cli_runner.go:164] Run: docker container inspect kindnet-684669 --format={{.State.Status}}
	I1017 20:14:15.657451  412924 cli_runner.go:164] Run: docker exec kindnet-684669 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:14:15.708928  412924 oci.go:144] the created container "kindnet-684669" has a running status.
	I1017 20:14:15.708971  412924 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa...
	I1017 20:14:15.938780  412924 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:14:15.973031  412924 cli_runner.go:164] Run: docker container inspect kindnet-684669 --format={{.State.Status}}
	I1017 20:14:15.996064  412924 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:14:15.996094  412924 kic_runner.go:114] Args: [docker exec --privileged kindnet-684669 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:14:16.043663  412924 cli_runner.go:164] Run: docker container inspect kindnet-684669 --format={{.State.Status}}
	I1017 20:14:16.065167  412924 machine.go:93] provisionDockerMachine start ...
	I1017 20:14:16.065275  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:16.086589  412924 main.go:141] libmachine: Using SSH client type: native
	I1017 20:14:16.086907  412924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1017 20:14:16.086927  412924 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:14:16.225702  412924 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-684669
	
	I1017 20:14:16.225734  412924 ubuntu.go:182] provisioning hostname "kindnet-684669"
	I1017 20:14:16.225819  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:16.245685  412924 main.go:141] libmachine: Using SSH client type: native
	I1017 20:14:16.245966  412924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1017 20:14:16.245983  412924 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-684669 && echo "kindnet-684669" | sudo tee /etc/hostname
	I1017 20:14:16.394498  412924 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-684669
	
	I1017 20:14:16.394596  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:16.413806  412924 main.go:141] libmachine: Using SSH client type: native
	I1017 20:14:16.414043  412924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1017 20:14:16.414071  412924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-684669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-684669/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-684669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:14:16.554685  412924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:14:16.554720  412924 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:14:16.554784  412924 ubuntu.go:190] setting up certificates
	I1017 20:14:16.554798  412924 provision.go:84] configureAuth start
	I1017 20:14:16.554860  412924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-684669
	I1017 20:14:16.574011  412924 provision.go:143] copyHostCerts
	I1017 20:14:16.574095  412924 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:14:16.574114  412924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:14:16.574200  412924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:14:16.574314  412924 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:14:16.574338  412924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:14:16.574383  412924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:14:16.574477  412924 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:14:16.574488  412924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:14:16.574526  412924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:14:16.574615  412924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.kindnet-684669 san=[127.0.0.1 192.168.94.2 kindnet-684669 localhost minikube]
	I1017 20:14:16.675706  412924 provision.go:177] copyRemoteCerts
	I1017 20:14:16.675799  412924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:14:16.675851  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:16.694306  412924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa Username:docker}
	I1017 20:14:16.792239  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:14:16.813939  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1017 20:14:16.832601  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:14:16.851534  412924 provision.go:87] duration metric: took 296.717779ms to configureAuth
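configureAuth above generates a server certificate whose SAN list covers the loopback address, the container IP, the machine name, localhost and minikube. minikube does this in Go; the openssl commands below are only an illustrative stand-in for the same idea (file names and validity period are assumptions, not minikube's values):

    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.kindnet-684669" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:kindnet-684669,DNS:localhost,DNS:minikube')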
	I1017 20:14:16.851569  412924 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:14:16.851791  412924 config.go:182] Loaded profile config "kindnet-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:14:16.851915  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:16.870507  412924 main.go:141] libmachine: Using SSH client type: native
	I1017 20:14:16.870755  412924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1017 20:14:16.870777  412924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:14:17.125313  412924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:14:17.125338  412924 machine.go:96] duration metric: took 1.060143778s to provisionDockerMachine
	I1017 20:14:17.125351  412924 client.go:171] duration metric: took 7.250719495s to LocalClient.Create
	I1017 20:14:17.125372  412924 start.go:167] duration metric: took 7.250778897s to libmachine.API.Create "kindnet-684669"
	I1017 20:14:17.125381  412924 start.go:293] postStartSetup for "kindnet-684669" (driver="docker")
	I1017 20:14:17.125392  412924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:14:17.125454  412924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:14:17.125503  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:17.144041  412924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa Username:docker}
	I1017 20:14:17.243483  412924 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:14:17.247444  412924 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:14:17.247469  412924 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:14:17.247480  412924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:14:17.247533  412924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:14:17.247621  412924 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:14:17.247758  412924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:14:17.256386  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:14:17.279237  412924 start.go:296] duration metric: took 153.839594ms for postStartSetup
	I1017 20:14:17.279621  412924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-684669
	I1017 20:14:17.298349  412924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/config.json ...
	I1017 20:14:17.298659  412924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:14:17.298707  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:17.317475  412924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa Username:docker}
	I1017 20:14:17.414249  412924 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:14:17.419272  412924 start.go:128] duration metric: took 7.547241521s to createHost
	I1017 20:14:17.419303  412924 start.go:83] releasing machines lock for "kindnet-684669", held for 7.547404885s
	I1017 20:14:17.419374  412924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-684669
	I1017 20:14:17.437439  412924 ssh_runner.go:195] Run: cat /version.json
	I1017 20:14:17.437493  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:17.437507  412924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:14:17.437564  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:17.457561  412924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa Username:docker}
	I1017 20:14:17.457561  412924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa Username:docker}
	I1017 20:14:17.609556  412924 ssh_runner.go:195] Run: systemctl --version
	I1017 20:14:17.616418  412924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:14:17.653503  412924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:14:17.658767  412924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:14:17.658839  412924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:14:17.688232  412924 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 20:14:17.688260  412924 start.go:495] detecting cgroup driver to use...
	I1017 20:14:17.688297  412924 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:14:17.688344  412924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:14:17.706555  412924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:14:17.719818  412924 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:14:17.719872  412924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:14:17.737118  412924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:14:17.756244  412924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:14:17.842189  412924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:14:17.935434  412924 docker.go:234] disabling docker service ...
	I1017 20:14:17.935509  412924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:14:17.956207  412924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:14:17.970490  412924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:14:18.061242  412924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:14:18.148014  412924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:14:18.161296  412924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:14:18.176509  412924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:14:18.176569  412924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.190097  412924 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:14:18.190169  412924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.199733  412924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.209560  412924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.218921  412924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:14:18.227606  412924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.236831  412924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.251330  412924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.260778  412924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:14:18.269243  412924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:14:18.277263  412924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:14:18.359763  412924 ssh_runner.go:195] Run: sudo systemctl restart crio
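The sed edits above leave the CRI-O drop-in pointing at the minikube pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. A sketch of checking the result after the restart (the grep pattern is an assumption; the expected values follow from the sed expressions above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",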
	I1017 20:14:18.469347  412924 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:14:18.469423  412924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:14:18.473692  412924 start.go:563] Will wait 60s for crictl version
	I1017 20:14:18.473790  412924 ssh_runner.go:195] Run: which crictl
	I1017 20:14:18.477582  412924 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:14:18.504980  412924 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:14:18.505060  412924 ssh_runner.go:195] Run: crio --version
	I1017 20:14:18.534168  412924 ssh_runner.go:195] Run: crio --version
	I1017 20:14:18.565081  412924 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:14:18.566563  412924 cli_runner.go:164] Run: docker network inspect kindnet-684669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:14:18.584866  412924 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1017 20:14:18.589128  412924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:14:18.600036  412924 kubeadm.go:883] updating cluster {Name:kindnet-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:14:18.600143  412924 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:14:18.600207  412924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:14:18.633892  412924 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:14:18.633913  412924 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:14:18.633959  412924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:14:18.661833  412924 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:14:18.661856  412924 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:14:18.661864  412924 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1017 20:14:18.661949  412924 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-684669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1017 20:14:18.662007  412924 ssh_runner.go:195] Run: crio config
	I1017 20:14:18.710788  412924 cni.go:84] Creating CNI manager for "kindnet"
	I1017 20:14:18.710820  412924 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:14:18.710847  412924 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-684669 NodeName:kindnet-684669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:14:18.711000  412924 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-684669"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:14:18.711074  412924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:14:18.719898  412924 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:14:18.719955  412924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:14:18.728188  412924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1017 20:14:18.741671  412924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:14:18.758533  412924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1017 20:14:18.772250  412924 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:14:18.776180  412924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:14:18.786665  412924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:14:18.870764  412924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:14:18.892659  412924 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669 for IP: 192.168.94.2
	I1017 20:14:18.892684  412924 certs.go:195] generating shared ca certs ...
	I1017 20:14:18.892707  412924 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:18.892916  412924 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:14:18.892983  412924 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:14:18.892997  412924 certs.go:257] generating profile certs ...
	I1017 20:14:18.893077  412924 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/client.key
	I1017 20:14:18.893103  412924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/client.crt with IP's: []
	I1017 20:14:19.033448  412924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/client.crt ...
	I1017 20:14:19.033477  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/client.crt: {Name:mk2a57d317a69e1a17a17f2649a36a4468e31c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:19.033656  412924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/client.key ...
	I1017 20:14:19.033667  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/client.key: {Name:mk27fc367e6992c5aa4115122d8df0c5bdbcea28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:19.033759  412924 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.key.d43a5f48
	I1017 20:14:19.033774  412924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.crt.d43a5f48 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1017 20:14:19.396349  412924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.crt.d43a5f48 ...
	I1017 20:14:19.396385  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.crt.d43a5f48: {Name:mkc94fc19212a4862771e31695dcfb01f79ee99f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:19.396549  412924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.key.d43a5f48 ...
	I1017 20:14:19.396562  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.key.d43a5f48: {Name:mka74221e3a37dec5c10e028c66239411e71088c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:19.396634  412924 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.crt.d43a5f48 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.crt
	I1017 20:14:19.396749  412924 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.key.d43a5f48 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.key
	I1017 20:14:19.396821  412924 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.key
	I1017 20:14:19.396839  412924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.crt with IP's: []
	W1017 20:14:18.941857  407971 node_ready.go:57] node "auto-684669" has "Ready":"False" status (will retry)
	W1017 20:14:20.942410  407971 node_ready.go:57] node "auto-684669" has "Ready":"False" status (will retry)
	I1017 20:14:19.701187  412924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.crt ...
	I1017 20:14:19.701217  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.crt: {Name:mk2e7fb78a805d1801962648d2d9cc4926d45b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:19.701395  412924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.key ...
	I1017 20:14:19.701410  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.key: {Name:mk864dc0643cd858464fee4246a0effbe4361716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:19.701607  412924 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:14:19.701653  412924 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:14:19.701664  412924 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:14:19.701686  412924 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:14:19.701708  412924 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:14:19.701728  412924 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:14:19.701799  412924 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:14:19.702559  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:14:19.721898  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:14:19.740825  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:14:19.759946  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:14:19.779166  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 20:14:19.798773  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:14:19.817157  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:14:19.835563  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:14:19.853974  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:14:19.874815  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:14:19.893317  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:14:19.914026  412924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:14:19.927464  412924 ssh_runner.go:195] Run: openssl version
	I1017 20:14:19.933999  412924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:14:19.943149  412924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:14:19.947209  412924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:14:19.947268  412924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:14:19.982111  412924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
	I1017 20:14:19.991603  412924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:14:20.001476  412924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:14:20.005756  412924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:14:20.005938  412924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:14:20.041113  412924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:14:20.050962  412924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:14:20.060575  412924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:14:20.064868  412924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:14:20.064926  412924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:14:20.099759  412924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
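The /etc/ssl/certs/51391683.0, 3ec20f2e.0 and b5213941.0 links above follow OpenSSL's subject-hash lookup convention: the link name is the output of openssl x509 -hash for the certificate, plus a .0 suffix. A minimal sketch of producing one such link by hand:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$pem").0"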
	I1017 20:14:20.109788  412924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:14:20.113825  412924 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:14:20.113875  412924 kubeadm.go:400] StartCluster: {Name:kindnet-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:14:20.113936  412924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:14:20.113977  412924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:14:20.145064  412924 cri.go:89] found id: ""
	I1017 20:14:20.145136  412924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:14:20.154049  412924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:14:20.162801  412924 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:14:20.162862  412924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:14:20.171481  412924 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:14:20.171501  412924 kubeadm.go:157] found existing configuration files:
	
	I1017 20:14:20.171541  412924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:14:20.179816  412924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:14:20.179877  412924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:14:20.188277  412924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:14:20.197397  412924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:14:20.197452  412924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:14:20.206296  412924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:14:20.214773  412924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:14:20.214835  412924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:14:20.223121  412924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:14:20.231529  412924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:14:20.231595  412924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
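The four grep-then-remove steps above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already names the expected control-plane endpoint. A minimal shell sketch of the same loop, with the endpoint and file list taken from the log (run on the node, not the host):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero when the endpoint is absent or the file is missing,
      # so the stale (or never-created) file is removed before kubeadm init runs.
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
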
	I1017 20:14:20.239822  412924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:14:20.318712  412924 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 20:14:20.388635  412924 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1017 20:14:22.942582  407971 node_ready.go:57] node "auto-684669" has "Ready":"False" status (will retry)
	I1017 20:14:23.441617  407971 node_ready.go:49] node "auto-684669" is "Ready"
	I1017 20:14:23.441656  407971 node_ready.go:38] duration metric: took 11.003119715s for node "auto-684669" to be "Ready" ...
	I1017 20:14:23.441673  407971 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:14:23.441733  407971 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:14:23.455702  407971 api_server.go:72] duration metric: took 11.335689053s to wait for apiserver process to appear ...
	I1017 20:14:23.455734  407971 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:14:23.455769  407971 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:14:23.460131  407971 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1017 20:14:23.461312  407971 api_server.go:141] control plane version: v1.34.1
	I1017 20:14:23.461344  407971 api_server.go:131] duration metric: took 5.590557ms to wait for apiserver health ...
	I1017 20:14:23.461355  407971 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:14:23.464560  407971 system_pods.go:59] 8 kube-system pods found
	I1017 20:14:23.464589  407971 system_pods.go:61] "coredns-66bc5c9577-5qbtt" [81a7206d-a769-47ad-9e2f-d0d0af4c51a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:14:23.464599  407971 system_pods.go:61] "etcd-auto-684669" [bfbd3250-1bcf-40cc-844f-1f3af66a928e] Running
	I1017 20:14:23.464607  407971 system_pods.go:61] "kindnet-22pt2" [ef6a6112-7fde-468f-b609-6a35a45badd3] Running
	I1017 20:14:23.464612  407971 system_pods.go:61] "kube-apiserver-auto-684669" [002c263a-5150-4b93-ad70-d5e03aaa24a3] Running
	I1017 20:14:23.464618  407971 system_pods.go:61] "kube-controller-manager-auto-684669" [ead7e386-0dd6-4cff-8c31-61cfc8e1c741] Running
	I1017 20:14:23.464623  407971 system_pods.go:61] "kube-proxy-nwck8" [92519eab-a167-402a-ae5d-f4323f73c06e] Running
	I1017 20:14:23.464628  407971 system_pods.go:61] "kube-scheduler-auto-684669" [8bb5861d-204f-43d9-b2d0-510dff5c22c0] Running
	I1017 20:14:23.464634  407971 system_pods.go:61] "storage-provisioner" [fb95060a-e1b8-4ee6-9ef4-3495dce3a0e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:14:23.464655  407971 system_pods.go:74] duration metric: took 3.284171ms to wait for pod list to return data ...
	I1017 20:14:23.464667  407971 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:14:23.467272  407971 default_sa.go:45] found service account: "default"
	I1017 20:14:23.467298  407971 default_sa.go:55] duration metric: took 2.624931ms for default service account to be created ...
	I1017 20:14:23.467307  407971 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:14:23.471985  407971 system_pods.go:86] 8 kube-system pods found
	I1017 20:14:23.472028  407971 system_pods.go:89] "coredns-66bc5c9577-5qbtt" [81a7206d-a769-47ad-9e2f-d0d0af4c51a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:14:23.472038  407971 system_pods.go:89] "etcd-auto-684669" [bfbd3250-1bcf-40cc-844f-1f3af66a928e] Running
	I1017 20:14:23.472046  407971 system_pods.go:89] "kindnet-22pt2" [ef6a6112-7fde-468f-b609-6a35a45badd3] Running
	I1017 20:14:23.472052  407971 system_pods.go:89] "kube-apiserver-auto-684669" [002c263a-5150-4b93-ad70-d5e03aaa24a3] Running
	I1017 20:14:23.472064  407971 system_pods.go:89] "kube-controller-manager-auto-684669" [ead7e386-0dd6-4cff-8c31-61cfc8e1c741] Running
	I1017 20:14:23.472073  407971 system_pods.go:89] "kube-proxy-nwck8" [92519eab-a167-402a-ae5d-f4323f73c06e] Running
	I1017 20:14:23.472078  407971 system_pods.go:89] "kube-scheduler-auto-684669" [8bb5861d-204f-43d9-b2d0-510dff5c22c0] Running
	I1017 20:14:23.472085  407971 system_pods.go:89] "storage-provisioner" [fb95060a-e1b8-4ee6-9ef4-3495dce3a0e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:14:23.472111  407971 retry.go:31] will retry after 281.380432ms: missing components: kube-dns
	I1017 20:14:23.758176  407971 system_pods.go:86] 8 kube-system pods found
	I1017 20:14:23.758219  407971 system_pods.go:89] "coredns-66bc5c9577-5qbtt" [81a7206d-a769-47ad-9e2f-d0d0af4c51a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:14:23.758228  407971 system_pods.go:89] "etcd-auto-684669" [bfbd3250-1bcf-40cc-844f-1f3af66a928e] Running
	I1017 20:14:23.758237  407971 system_pods.go:89] "kindnet-22pt2" [ef6a6112-7fde-468f-b609-6a35a45badd3] Running
	I1017 20:14:23.758242  407971 system_pods.go:89] "kube-apiserver-auto-684669" [002c263a-5150-4b93-ad70-d5e03aaa24a3] Running
	I1017 20:14:23.758249  407971 system_pods.go:89] "kube-controller-manager-auto-684669" [ead7e386-0dd6-4cff-8c31-61cfc8e1c741] Running
	I1017 20:14:23.758260  407971 system_pods.go:89] "kube-proxy-nwck8" [92519eab-a167-402a-ae5d-f4323f73c06e] Running
	I1017 20:14:23.758265  407971 system_pods.go:89] "kube-scheduler-auto-684669" [8bb5861d-204f-43d9-b2d0-510dff5c22c0] Running
	I1017 20:14:23.758278  407971 system_pods.go:89] "storage-provisioner" [fb95060a-e1b8-4ee6-9ef4-3495dce3a0e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:14:23.758300  407971 retry.go:31] will retry after 339.214284ms: missing components: kube-dns
	I1017 20:14:24.101645  407971 system_pods.go:86] 8 kube-system pods found
	I1017 20:14:24.101696  407971 system_pods.go:89] "coredns-66bc5c9577-5qbtt" [81a7206d-a769-47ad-9e2f-d0d0af4c51a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:14:24.101706  407971 system_pods.go:89] "etcd-auto-684669" [bfbd3250-1bcf-40cc-844f-1f3af66a928e] Running
	I1017 20:14:24.101714  407971 system_pods.go:89] "kindnet-22pt2" [ef6a6112-7fde-468f-b609-6a35a45badd3] Running
	I1017 20:14:24.101719  407971 system_pods.go:89] "kube-apiserver-auto-684669" [002c263a-5150-4b93-ad70-d5e03aaa24a3] Running
	I1017 20:14:24.101725  407971 system_pods.go:89] "kube-controller-manager-auto-684669" [ead7e386-0dd6-4cff-8c31-61cfc8e1c741] Running
	I1017 20:14:24.101734  407971 system_pods.go:89] "kube-proxy-nwck8" [92519eab-a167-402a-ae5d-f4323f73c06e] Running
	I1017 20:14:24.101773  407971 system_pods.go:89] "kube-scheduler-auto-684669" [8bb5861d-204f-43d9-b2d0-510dff5c22c0] Running
	I1017 20:14:24.101787  407971 system_pods.go:89] "storage-provisioner" [fb95060a-e1b8-4ee6-9ef4-3495dce3a0e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:14:24.101808  407971 retry.go:31] will retry after 359.497927ms: missing components: kube-dns
	I1017 20:14:24.466984  407971 system_pods.go:86] 8 kube-system pods found
	I1017 20:14:24.467036  407971 system_pods.go:89] "coredns-66bc5c9577-5qbtt" [81a7206d-a769-47ad-9e2f-d0d0af4c51a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:14:24.467046  407971 system_pods.go:89] "etcd-auto-684669" [bfbd3250-1bcf-40cc-844f-1f3af66a928e] Running
	I1017 20:14:24.467054  407971 system_pods.go:89] "kindnet-22pt2" [ef6a6112-7fde-468f-b609-6a35a45badd3] Running
	I1017 20:14:24.467060  407971 system_pods.go:89] "kube-apiserver-auto-684669" [002c263a-5150-4b93-ad70-d5e03aaa24a3] Running
	I1017 20:14:24.467067  407971 system_pods.go:89] "kube-controller-manager-auto-684669" [ead7e386-0dd6-4cff-8c31-61cfc8e1c741] Running
	I1017 20:14:24.467081  407971 system_pods.go:89] "kube-proxy-nwck8" [92519eab-a167-402a-ae5d-f4323f73c06e] Running
	I1017 20:14:24.467092  407971 system_pods.go:89] "kube-scheduler-auto-684669" [8bb5861d-204f-43d9-b2d0-510dff5c22c0] Running
	I1017 20:14:24.467100  407971 system_pods.go:89] "storage-provisioner" [fb95060a-e1b8-4ee6-9ef4-3495dce3a0e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:14:24.467123  407971 retry.go:31] will retry after 563.200817ms: missing components: kube-dns
	I1017 20:14:25.036239  407971 system_pods.go:86] 8 kube-system pods found
	I1017 20:14:25.036278  407971 system_pods.go:89] "coredns-66bc5c9577-5qbtt" [81a7206d-a769-47ad-9e2f-d0d0af4c51a7] Running
	I1017 20:14:25.036286  407971 system_pods.go:89] "etcd-auto-684669" [bfbd3250-1bcf-40cc-844f-1f3af66a928e] Running
	I1017 20:14:25.036293  407971 system_pods.go:89] "kindnet-22pt2" [ef6a6112-7fde-468f-b609-6a35a45badd3] Running
	I1017 20:14:25.036298  407971 system_pods.go:89] "kube-apiserver-auto-684669" [002c263a-5150-4b93-ad70-d5e03aaa24a3] Running
	I1017 20:14:25.036314  407971 system_pods.go:89] "kube-controller-manager-auto-684669" [ead7e386-0dd6-4cff-8c31-61cfc8e1c741] Running
	I1017 20:14:25.036320  407971 system_pods.go:89] "kube-proxy-nwck8" [92519eab-a167-402a-ae5d-f4323f73c06e] Running
	I1017 20:14:25.036328  407971 system_pods.go:89] "kube-scheduler-auto-684669" [8bb5861d-204f-43d9-b2d0-510dff5c22c0] Running
	I1017 20:14:25.036333  407971 system_pods.go:89] "storage-provisioner" [fb95060a-e1b8-4ee6-9ef4-3495dce3a0e0] Running
	I1017 20:14:25.036344  407971 system_pods.go:126] duration metric: took 1.569030648s to wait for k8s-apps to be running ...
	I1017 20:14:25.036355  407971 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:14:25.036407  407971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:14:25.055211  407971 system_svc.go:56] duration metric: took 18.842579ms WaitForService to wait for kubelet
	I1017 20:14:25.055247  407971 kubeadm.go:586] duration metric: took 12.935239172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:14:25.055270  407971 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:14:25.058768  407971 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:14:25.058799  407971 node_conditions.go:123] node cpu capacity is 8
	I1017 20:14:25.058815  407971 node_conditions.go:105] duration metric: took 3.538662ms to run NodePressure ...
	I1017 20:14:25.058834  407971 start.go:241] waiting for startup goroutines ...
	I1017 20:14:25.058845  407971 start.go:246] waiting for cluster config update ...
	I1017 20:14:25.058862  407971 start.go:255] writing updated cluster config ...
	I1017 20:14:25.059198  407971 ssh_runner.go:195] Run: rm -f paused
	I1017 20:14:25.064122  407971 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:14:25.071721  407971 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5qbtt" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.078858  407971 pod_ready.go:94] pod "coredns-66bc5c9577-5qbtt" is "Ready"
	I1017 20:14:25.078893  407971 pod_ready.go:86] duration metric: took 7.118956ms for pod "coredns-66bc5c9577-5qbtt" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.081975  407971 pod_ready.go:83] waiting for pod "etcd-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.090535  407971 pod_ready.go:94] pod "etcd-auto-684669" is "Ready"
	I1017 20:14:25.090579  407971 pod_ready.go:86] duration metric: took 8.551128ms for pod "etcd-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.094206  407971 pod_ready.go:83] waiting for pod "kube-apiserver-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.099837  407971 pod_ready.go:94] pod "kube-apiserver-auto-684669" is "Ready"
	I1017 20:14:25.099866  407971 pod_ready.go:86] duration metric: took 5.629519ms for pod "kube-apiserver-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.102675  407971 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.469183  407971 pod_ready.go:94] pod "kube-controller-manager-auto-684669" is "Ready"
	I1017 20:14:25.469215  407971 pod_ready.go:86] duration metric: took 366.510428ms for pod "kube-controller-manager-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.669005  407971 pod_ready.go:83] waiting for pod "kube-proxy-nwck8" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:26.068993  407971 pod_ready.go:94] pod "kube-proxy-nwck8" is "Ready"
	I1017 20:14:26.069025  407971 pod_ready.go:86] duration metric: took 399.993169ms for pod "kube-proxy-nwck8" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:26.269553  407971 pod_ready.go:83] waiting for pod "kube-scheduler-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:26.669076  407971 pod_ready.go:94] pod "kube-scheduler-auto-684669" is "Ready"
	I1017 20:14:26.669111  407971 pod_ready.go:86] duration metric: took 399.530008ms for pod "kube-scheduler-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:26.669146  407971 pod_ready.go:40] duration metric: took 1.604975691s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:14:26.716526  407971 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:14:26.718661  407971 out.go:179] * Done! kubectl is now configured to use "auto-684669" cluster and "default" namespace by default
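The apiserver wait earlier in this log is a plain HTTPS probe of /healthz followed by a version read. A hedged reproduction against the same endpoint (IP and port taken from the log; -k skips certificate verification, which minikube itself performs with the cluster CA):

    # Expect the body "ok" once the apiserver is healthy.
    curl -k https://192.168.103.2:8443/healthz
    # The control-plane version the log reports (v1.34.1) comes from /version.
    curl -k https://192.168.103.2:8443/version
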
	
	
	==> CRI-O <==
	Oct 17 20:13:51 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:13:51.505300186Z" level=info msg="Started container" PID=1740 containerID=06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9/dashboard-metrics-scraper id=0b9d35e2-b997-47f4-b2f8-922d4a4ef785 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a33e30f01196a37b95050b7072f0f3034337c96f365dc0cd1e80d2fa9406929f
	Oct 17 20:13:52 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:13:52.465841333Z" level=info msg="Removing container: ae50bbd6796debe87db9fa46ef2949d3d8e26fb48382392d370f79e77a535888" id=f4865a32-1e07-4b3d-90c4-e31414f2b8e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:13:52 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:13:52.476665181Z" level=info msg="Removed container ae50bbd6796debe87db9fa46ef2949d3d8e26fb48382392d370f79e77a535888: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9/dashboard-metrics-scraper" id=f4865a32-1e07-4b3d-90c4-e31414f2b8e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.517799112Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=25b6cfa8-b2f4-4185-9ae2-d0eab1eabc18 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.518820563Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4252307b-55e9-4a7f-8391-4ffe4b887106 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.520111447Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f831c736-6f99-4acf-ad66-76d82d61f2f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.52041544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.526587145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.526783562Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/293fbde659b1c1854edb8e09c88e01caa25930e20fecbc9f95f33400cfec2a0b/merged/etc/passwd: no such file or directory"
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.52681169Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/293fbde659b1c1854edb8e09c88e01caa25930e20fecbc9f95f33400cfec2a0b/merged/etc/group: no such file or directory"
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.527642119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.559079418Z" level=info msg="Created container a7cd25c03695ca30218da14c7e94f11aaa2d7d8a98ccd3f06cff2c1dad0922bd: kube-system/storage-provisioner/storage-provisioner" id=f831c736-6f99-4acf-ad66-76d82d61f2f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.560218615Z" level=info msg="Starting container: a7cd25c03695ca30218da14c7e94f11aaa2d7d8a98ccd3f06cff2c1dad0922bd" id=abb6d03e-716a-4ee0-8cde-3f72d6518815 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.562956607Z" level=info msg="Started container" PID=1755 containerID=a7cd25c03695ca30218da14c7e94f11aaa2d7d8a98ccd3f06cff2c1dad0922bd description=kube-system/storage-provisioner/storage-provisioner id=abb6d03e-716a-4ee0-8cde-3f72d6518815 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4d35746c39f30639b455f08c950558a3d3a4ae1b1f0f4b06f3389a62031478d
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.390531742Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=889aff3d-adea-49c4-8f3c-db6bc3eb808d name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.391593035Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=de0476f0-340e-4eba-b37f-80eff7a7a072 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.392799403Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9/dashboard-metrics-scraper" id=eb09e89a-ed1f-4b33-8a05-e1a822ca1446 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.393102511Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.399152584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.399864725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.42827861Z" level=info msg="Created container d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9/dashboard-metrics-scraper" id=eb09e89a-ed1f-4b33-8a05-e1a822ca1446 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.429102666Z" level=info msg="Starting container: d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2" id=335fb069-6ba5-481d-8a3f-4edc2b8b805c name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.431436495Z" level=info msg="Started container" PID=1789 containerID=d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9/dashboard-metrics-scraper id=335fb069-6ba5-481d-8a3f-4edc2b8b805c name=/runtime.v1.RuntimeService/StartContainer sandboxID=a33e30f01196a37b95050b7072f0f3034337c96f365dc0cd1e80d2fa9406929f
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.535914705Z" level=info msg="Removing container: 06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af" id=3cd07fd8-b355-4324-98ca-46e1b003ee69 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.547581232Z" level=info msg="Removed container 06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9/dashboard-metrics-scraper" id=3cd07fd8-b355-4324-98ca-46e1b003ee69 name=/runtime.v1.RuntimeService/RemoveContainer
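The create/start/remove churn above for dashboard-metrics-scraper-6ffb444bf9-lh7m9 shows the container exiting and being retried (it appears as Exited with ATTEMPT 2 in the status table below). A hedged way to get the exit reason from the previous attempt (pod name taken from the log):

    kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-lh7m9 --previous
    kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-lh7m9
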
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	d1658a45187f3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   2                   a33e30f01196a       dashboard-metrics-scraper-6ffb444bf9-lh7m9             kubernetes-dashboard
	a7cd25c03695c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   a4d35746c39f3       storage-provisioner                                    kube-system
	dec17f1d9027d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   3e9afcfc07afb       kubernetes-dashboard-855c9754f9-cfv55                  kubernetes-dashboard
	620478bbf7c35       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   8d01956ed30b0       coredns-66bc5c9577-bsp94                               kube-system
	6f2500593565c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   03750cd46c13f       busybox                                                default
	c4a95fedc4957       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   37e62db286f1b       kindnet-gzsxs                                          kube-system
	befec0b605a11       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   e2464514adfa6       kube-proxy-g7749                                       kube-system
	f6fedb384a1ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   a4d35746c39f3       storage-provisioner                                    kube-system
	c595776216f07       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   9a8c0bb72d31e       kube-apiserver-default-k8s-diff-port-563805            kube-system
	8b04285c22247       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   78b31eee621aa       etcd-default-k8s-diff-port-563805                      kube-system
	3921f3f537505       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   ed08be654bd73       kube-scheduler-default-k8s-diff-port-563805            kube-system
	304a87295c1b6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   d5d538f3961cd       kube-controller-manager-default-k8s-diff-port-563805   kube-system
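The table above is the default output of crictl; the kube-system-only listing attempted earlier in this report used a label filter. Both can be reproduced on the node with the flags copied from the log:

    sudo crictl ps -a
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
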
	
	
	==> coredns [620478bbf7c357ce43fdb113d1af8b156c3f06537ebbde3f375835b749f63165] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42205 - 26352 "HINFO IN 3984473532376090302.5320220858447455705. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.102008098s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
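The i/o timeouts above are CoreDNS failing to reach the in-cluster apiserver Service at 10.96.0.1:443 before pod networking was ready; they stop once the API becomes reachable. A hedged way to pull the same log and confirm the Service IP (the k8s-app=kube-dns label also appears in the pod-ready wait earlier in this report):

    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50
    kubectl get svc kubernetes    # ClusterIP should be 10.96.0.1 given ServiceCIDR 10.96.0.0/12
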
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-563805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-563805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=default-k8s-diff-port-563805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_12_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:12:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-563805
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:14:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:14:09 +0000   Fri, 17 Oct 2025 20:12:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:14:09 +0000   Fri, 17 Oct 2025 20:12:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:14:09 +0000   Fri, 17 Oct 2025 20:12:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:14:09 +0000   Fri, 17 Oct 2025 20:12:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-563805
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                8216883e-3ed5-4f7d-8ef7-444b758f4457
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-bsp94                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-default-k8s-diff-port-563805                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-gzsxs                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-default-k8s-diff-port-563805             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-563805    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-g7749                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-default-k8s-diff-port-563805             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lh7m9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cfv55                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           105s               node-controller  Node default-k8s-diff-port-563805 event: Registered Node default-k8s-diff-port-563805 in Controller
	  Normal  NodeReady                93s                kubelet          Node default-k8s-diff-port-563805 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node default-k8s-diff-port-563805 event: Registered Node default-k8s-diff-port-563805 in Controller
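Everything in this section is the output of a single describe call; to regenerate it (node name taken from the header above):

    kubectl describe node default-k8s-diff-port-563805
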
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
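The repeated "martian source" lines are the kernel flagging packets whose source address is implausible on eth0 (here 127.0.0.1 arriving from the pod network); with nested container networking this is noise rather than a test failure. To filter for them, or to silence the logging (the sysctl is standard Linux, not minikube-specific):

    sudo dmesg | grep -i martian
    sudo sysctl -w net.ipv4.conf.all.log_martians=0
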
	
	
	==> etcd [8b04285c222479d3b2ea10ca1123a4893d4e6350366905f40c907646a9f3259c] <==
	{"level":"warn","ts":"2025-10-17T20:13:38.111187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.118872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.127356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.136539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.143993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.151600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.159885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.174024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.177858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.193600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.252260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54006","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T20:13:46.561982Z","caller":"traceutil/trace.go:172","msg":"trace[2065924850] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"159.978583ms","start":"2025-10-17T20:13:46.401970Z","end":"2025-10-17T20:13:46.561948Z","steps":["trace[2065924850] 'process raft request'  (duration: 101.858557ms)","trace[2065924850] 'compare'  (duration: 58.018754ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:47.256659Z","caller":"traceutil/trace.go:172","msg":"trace[511270124] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"130.623243ms","start":"2025-10-17T20:13:47.126013Z","end":"2025-10-17T20:13:47.256636Z","steps":["trace[511270124] 'process raft request'  (duration: 128.859822ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:13:47.418573Z","caller":"traceutil/trace.go:172","msg":"trace[1494539764] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"156.777886ms","start":"2025-10-17T20:13:47.261770Z","end":"2025-10-17T20:13:47.418548Z","steps":["trace[1494539764] 'process raft request'  (duration: 135.264492ms)","trace[1494539764] 'compare'  (duration: 21.390588ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:47.565673Z","caller":"traceutil/trace.go:172","msg":"trace[951471414] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"142.086255ms","start":"2025-10-17T20:13:47.423556Z","end":"2025-10-17T20:13:47.565643Z","steps":["trace[951471414] 'process raft request'  (duration: 127.271842ms)","trace[951471414] 'compare'  (duration: 14.405914ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:47.715526Z","caller":"traceutil/trace.go:172","msg":"trace[1498324144] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"144.592763ms","start":"2025-10-17T20:13:47.570911Z","end":"2025-10-17T20:13:47.715503Z","steps":["trace[1498324144] 'process raft request'  (duration: 117.083871ms)","trace[1498324144] 'compare'  (duration: 27.376489ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:47.987017Z","caller":"traceutil/trace.go:172","msg":"trace[852530649] transaction","detail":"{read_only:false; response_revision:525; number_of_response:1; }","duration":"177.121462ms","start":"2025-10-17T20:13:47.809866Z","end":"2025-10-17T20:13:47.986988Z","steps":["trace[852530649] 'process raft request'  (duration: 123.154334ms)","trace[852530649] 'compare'  (duration: 53.604575ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T20:13:48.343987Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"287.498836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-bsp94\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-10-17T20:13:48.344172Z","caller":"traceutil/trace.go:172","msg":"trace[903936769] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-bsp94; range_end:; response_count:1; response_revision:526; }","duration":"287.745049ms","start":"2025-10-17T20:13:48.056408Z","end":"2025-10-17T20:13:48.344153Z","steps":["trace[903936769] 'agreement among raft nodes before linearized reading'  (duration: 72.904648ms)","trace[903936769] 'range keys from in-memory index tree'  (duration: 214.473644ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T20:13:48.344780Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"214.669085ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596442982395777 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-563805.186f6070acd65743\" mod_revision:524 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-563805.186f6070acd65743\" value_size:690 lease:499224406127619844 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-563805.186f6070acd65743\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-17T20:13:48.344874Z","caller":"traceutil/trace.go:172","msg":"trace[1296264633] transaction","detail":"{read_only:false; response_revision:527; number_of_response:1; }","duration":"345.012482ms","start":"2025-10-17T20:13:47.999848Z","end":"2025-10-17T20:13:48.344861Z","steps":["trace[1296264633] 'process raft request'  (duration: 129.470485ms)","trace[1296264633] 'compare'  (duration: 214.5254ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T20:13:48.344934Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T20:13:47.999814Z","time spent":"345.082821ms","remote":"127.0.0.1:53024","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":778,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-563805.186f6070acd65743\" mod_revision:524 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-563805.186f6070acd65743\" value_size:690 lease:499224406127619844 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-563805.186f6070acd65743\" > >"}
	{"level":"warn","ts":"2025-10-17T20:13:48.586654Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.786997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T20:13:48.586726Z","caller":"traceutil/trace.go:172","msg":"trace[952525593] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:530; }","duration":"143.865424ms","start":"2025-10-17T20:13:48.442844Z","end":"2025-10-17T20:13:48.586709Z","steps":["trace[952525593] 'agreement among raft nodes before linearized reading'  (duration: 84.098089ms)","trace[952525593] 'range keys from in-memory index tree'  (duration: 59.659352ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:48.586824Z","caller":"traceutil/trace.go:172","msg":"trace[458332569] transaction","detail":"{read_only:false; response_revision:531; number_of_response:1; }","duration":"183.909471ms","start":"2025-10-17T20:13:48.402893Z","end":"2025-10-17T20:13:48.586803Z","steps":["trace[458332569] 'process raft request'  (duration: 124.087536ms)","trace[458332569] 'compare'  (duration: 59.607168ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:14:28 up  1:56,  0 user,  load average: 4.45, 4.65, 3.02
	Linux default-k8s-diff-port-563805 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c4a95fedc4957f2772d4188de75f2d0b0715d0ead81d66093c1bb82a882026d5] <==
	I1017 20:13:39.999637       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:13:39.999943       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 20:13:40.000155       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:13:40.000173       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:13:40.000198       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:13:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:13:40.296948       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:13:40.297075       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:13:40.297096       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:13:40.297282       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:13:40.697318       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:13:40.697353       1 metrics.go:72] Registering metrics
	I1017 20:13:40.697469       1 controller.go:711] "Syncing nftables rules"
	I1017 20:13:50.259344       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:13:50.259424       1 main.go:301] handling current node
	I1017 20:14:00.259035       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:14:00.259086       1 main.go:301] handling current node
	I1017 20:14:10.259313       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:14:10.259368       1 main.go:301] handling current node
	I1017 20:14:20.258797       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:14:20.258831       1 main.go:301] handling current node
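kindnet reports syncing nftables-based network-policy rules and handling only its own node. To inspect what it programmed (nft is the standard nftables CLI; table names vary by kindnet version, so list the whole ruleset):

    sudo nft list ruleset | head -n 40
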
	
	
	==> kube-apiserver [c595776216f076fd092a3194172be36c923143b82bc0c107305659b192166d72] <==
	I1017 20:13:38.909038       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:13:38.910872       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:13:38.910892       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:13:38.910900       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:13:38.910907       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:13:38.911191       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:13:38.911356       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:13:38.920843       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:13:38.921785       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1017 20:13:38.923555       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:13:38.967420       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:13:38.976310       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 20:13:39.317594       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:13:39.350171       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:13:39.375891       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:13:39.384873       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:13:39.394460       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:13:39.435666       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.223.121"}
	I1017 20:13:39.446767       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.245.135"}
	I1017 20:13:39.808466       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:13:42.456710       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:13:42.855723       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:13:42.855723       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:13:42.906127       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:13:42.906127       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [304a87295c1b69a58634803b264b8f89d380003a2081fe68a13fad1c6406af7c] <==
	I1017 20:13:42.302789       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:13:42.302891       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-563805"
	I1017 20:13:42.302948       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:13:42.303639       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:13:42.305966       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:13:42.308311       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:13:42.308660       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:13:42.323806       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:13:42.327147       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:13:42.332288       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:13:42.337594       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:13:42.339924       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:13:42.345272       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:13:42.345295       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:13:42.345316       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:13:42.349638       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:13:42.351661       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:13:42.352723       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:13:42.352787       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:13:42.353957       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:13:42.358481       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:13:42.358534       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:13:42.358573       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:13:42.358586       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:13:42.358593       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	
	
	==> kube-proxy [befec0b605a11944db3aa5e1626c300e786a26bec9be6f5bef7d94439e2b74cd] <==
	I1017 20:13:39.798668       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:13:39.860824       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:13:39.960961       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:13:39.961001       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 20:13:39.961107       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:13:39.986277       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:13:39.986341       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:13:39.993539       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:13:39.994035       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:13:39.994074       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:13:39.998297       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:13:39.998326       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:13:39.998365       1 config.go:200] "Starting service config controller"
	I1017 20:13:39.998372       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:13:39.998399       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:13:39.998397       1 config.go:309] "Starting node config controller"
	I1017 20:13:39.998412       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:13:39.998419       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:13:39.998405       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:13:40.098592       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:13:40.098557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:13:40.098691       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3921f3f5375050e83141087f7f8ca522220b109c30ad4b4d1d6c09216bc51b9b] <==
	I1017 20:13:38.032136       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:13:39.229535       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:13:39.229566       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:13:39.235471       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:13:39.235465       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:13:39.235519       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:13:39.235477       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:39.235587       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:39.235519       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:13:39.236559       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:13:39.236653       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:13:39.336912       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 20:13:39.336979       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:39.336919       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:13:42 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:42.901459     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8cb77f18-44bb-401c-b230-621ccb6ff4a4-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-cfv55\" (UID: \"8cb77f18-44bb-401c-b230-621ccb6ff4a4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cfv55"
	Oct 17 20:13:42 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:42.901507     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zpbp\" (UniqueName: \"kubernetes.io/projected/2765cd74-c48c-40ad-8ac1-fb1a758dcd41-kube-api-access-5zpbp\") pod \"dashboard-metrics-scraper-6ffb444bf9-lh7m9\" (UID: \"2765cd74-c48c-40ad-8ac1-fb1a758dcd41\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9"
	Oct 17 20:13:42 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:42.901526     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2765cd74-c48c-40ad-8ac1-fb1a758dcd41-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-lh7m9\" (UID: \"2765cd74-c48c-40ad-8ac1-fb1a758dcd41\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9"
	Oct 17 20:13:42 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:42.901632     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzgr4\" (UniqueName: \"kubernetes.io/projected/8cb77f18-44bb-401c-b230-621ccb6ff4a4-kube-api-access-kzgr4\") pod \"kubernetes-dashboard-855c9754f9-cfv55\" (UID: \"8cb77f18-44bb-401c-b230-621ccb6ff4a4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cfv55"
	Oct 17 20:13:49 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:49.474343     722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cfv55" podStartSLOduration=1.843655536 podStartE2EDuration="7.474309742s" podCreationTimestamp="2025-10-17 20:13:42 +0000 UTC" firstStartedPulling="2025-10-17 20:13:43.115060649 +0000 UTC m=+6.833479908" lastFinishedPulling="2025-10-17 20:13:48.745714842 +0000 UTC m=+12.464134114" observedRunningTime="2025-10-17 20:13:49.474296413 +0000 UTC m=+13.192715693" watchObservedRunningTime="2025-10-17 20:13:49.474309742 +0000 UTC m=+13.192729022"
	Oct 17 20:13:51 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:51.459428     722 scope.go:117] "RemoveContainer" containerID="ae50bbd6796debe87db9fa46ef2949d3d8e26fb48382392d370f79e77a535888"
	Oct 17 20:13:52 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:52.464184     722 scope.go:117] "RemoveContainer" containerID="ae50bbd6796debe87db9fa46ef2949d3d8e26fb48382392d370f79e77a535888"
	Oct 17 20:13:52 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:52.464492     722 scope.go:117] "RemoveContainer" containerID="06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af"
	Oct 17 20:13:52 default-k8s-diff-port-563805 kubelet[722]: E1017 20:13:52.464728     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lh7m9_kubernetes-dashboard(2765cd74-c48c-40ad-8ac1-fb1a758dcd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9" podUID="2765cd74-c48c-40ad-8ac1-fb1a758dcd41"
	Oct 17 20:13:53 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:53.469621     722 scope.go:117] "RemoveContainer" containerID="06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af"
	Oct 17 20:13:53 default-k8s-diff-port-563805 kubelet[722]: E1017 20:13:53.469798     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lh7m9_kubernetes-dashboard(2765cd74-c48c-40ad-8ac1-fb1a758dcd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9" podUID="2765cd74-c48c-40ad-8ac1-fb1a758dcd41"
	Oct 17 20:13:59 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:59.893906     722 scope.go:117] "RemoveContainer" containerID="06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af"
	Oct 17 20:13:59 default-k8s-diff-port-563805 kubelet[722]: E1017 20:13:59.894128     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lh7m9_kubernetes-dashboard(2765cd74-c48c-40ad-8ac1-fb1a758dcd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9" podUID="2765cd74-c48c-40ad-8ac1-fb1a758dcd41"
	Oct 17 20:14:10 default-k8s-diff-port-563805 kubelet[722]: I1017 20:14:10.517317     722 scope.go:117] "RemoveContainer" containerID="f6fedb384a1ad00b57204bbb8a84f0877c763ba980fe5fe9bdd6d9fd495b8981"
	Oct 17 20:14:15 default-k8s-diff-port-563805 kubelet[722]: I1017 20:14:15.389885     722 scope.go:117] "RemoveContainer" containerID="06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af"
	Oct 17 20:14:15 default-k8s-diff-port-563805 kubelet[722]: I1017 20:14:15.534428     722 scope.go:117] "RemoveContainer" containerID="06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af"
	Oct 17 20:14:15 default-k8s-diff-port-563805 kubelet[722]: I1017 20:14:15.534690     722 scope.go:117] "RemoveContainer" containerID="d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2"
	Oct 17 20:14:15 default-k8s-diff-port-563805 kubelet[722]: E1017 20:14:15.535112     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lh7m9_kubernetes-dashboard(2765cd74-c48c-40ad-8ac1-fb1a758dcd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9" podUID="2765cd74-c48c-40ad-8ac1-fb1a758dcd41"
	Oct 17 20:14:19 default-k8s-diff-port-563805 kubelet[722]: I1017 20:14:19.893924     722 scope.go:117] "RemoveContainer" containerID="d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2"
	Oct 17 20:14:19 default-k8s-diff-port-563805 kubelet[722]: E1017 20:14:19.894116     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lh7m9_kubernetes-dashboard(2765cd74-c48c-40ad-8ac1-fb1a758dcd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9" podUID="2765cd74-c48c-40ad-8ac1-fb1a758dcd41"
	Oct 17 20:14:25 default-k8s-diff-port-563805 kubelet[722]: I1017 20:14:25.218040     722 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 17 20:14:25 default-k8s-diff-port-563805 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:14:25 default-k8s-diff-port-563805 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:14:25 default-k8s-diff-port-563805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 20:14:25 default-k8s-diff-port-563805 systemd[1]: kubelet.service: Consumed 1.735s CPU time.
	
	
	==> kubernetes-dashboard [dec17f1d9027dfa31aeaa2dc6ea73f5f3ea06821f779ca9a7b446e04d0051274] <==
	2025/10/17 20:13:48 Starting overwatch
	2025/10/17 20:13:48 Using namespace: kubernetes-dashboard
	2025/10/17 20:13:48 Using in-cluster config to connect to apiserver
	2025/10/17 20:13:48 Using secret token for csrf signing
	2025/10/17 20:13:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:13:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:13:48 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 20:13:48 Generating JWE encryption key
	2025/10/17 20:13:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:13:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:13:49 Initializing JWE encryption key from synchronized object
	2025/10/17 20:13:49 Creating in-cluster Sidecar client
	2025/10/17 20:13:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:13:49 Serving insecurely on HTTP port: 9090
	2025/10/17 20:14:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a7cd25c03695ca30218da14c7e94f11aaa2d7d8a98ccd3f06cff2c1dad0922bd] <==
	I1017 20:14:10.578160       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:14:10.588213       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:14:10.588375       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:14:10.598930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:14.055938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:18.316083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:21.915060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:24.969106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:27.994723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:28.005109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:14:28.005610       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:14:28.005710       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7cbc6369-13f8-42ff-8d5e-a08248991cf2", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-563805_09088fe6-096b-44dc-b2af-2ca91919bacd became leader
	I1017 20:14:28.005993       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-563805_09088fe6-096b-44dc-b2af-2ca91919bacd!
	W1017 20:14:28.014824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:28.028191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:14:28.106871       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-563805_09088fe6-096b-44dc-b2af-2ca91919bacd!
	
	
	==> storage-provisioner [f6fedb384a1ad00b57204bbb8a84f0877c763ba980fe5fe9bdd6d9fd495b8981] <==
	I1017 20:13:39.769272       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:14:09.774135       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

-- /stdout --
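The second storage-provisioner instance above exits fatally because the apiserver service VIP (10.96.0.1:443) stayed unreachable for the full probe window after the restart. A minimal sketch of the same reachability check, assuming only the address and timeout shown in the log line (a standalone diagnostic, not the provisioner's own code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Probe the in-cluster apiserver VIP the way the failing log line does:
		// GET https://10.96.0.1:443/version with a bounded timeout.
		client := &http.Client{
			Timeout: 32 * time.Second, // matches ?timeout=32s in the log
			Transport: &http.Transport{
				// The VIP serves a cluster-internal certificate; skip
				// verification for this diagnostic probe only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // e.g. dial tcp ... i/o timeout
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver status:", resp.Status)
	}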
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805: exit status 2 (364.818175ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
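For context: status --format renders a Go text/template over minikube's status struct, while degraded state is reported through the exit code, which is why the command can print Running and still exit 2. A sketch of the template mechanics (the Status struct below is illustrative; its field names only mirror the templates used in this report):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is an illustrative stand-in for the struct minikube renders
	// with --format; it is not minikube's actual type.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		s := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
		// Same template syntax as: minikube status --format={{.APIServer}}
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := t.Execute(os.Stdout, s); err != nil {
			panic(err)
		}
	}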
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-563805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
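The kubelet entries in the log above show dashboard-metrics-scraper's restart back-off doubling from 10s to 20s. Kubelet's CrashLoopBackOff grows geometrically per failed restart up to a cap; the 10s base and 5m cap below are assumed upstream kubelet defaults used for illustration, not values taken from this report:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// CrashLoopBackOff doubles per failed restart and is capped.
		// 10s base and 5m cap are assumed upstream kubelet defaults.
		const base = 10 * time.Second
		const maxDelay = 5 * time.Minute
		delay := base
		for restart := 1; restart <= 6; restart++ {
			fmt.Printf("restart %d: back-off %s\n", restart, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}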
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-563805
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-563805:

-- stdout --
	[
	    {
	        "Id": "7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655",
	        "Created": "2025-10-17T20:12:20.619875365Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 405209,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:13:29.825614622Z",
	            "FinishedAt": "2025-10-17T20:13:28.922611176Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655/hostname",
	        "HostsPath": "/var/lib/docker/containers/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655/hosts",
	        "LogPath": "/var/lib/docker/containers/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655/7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655-json.log",
	        "Name": "/default-k8s-diff-port-563805",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-563805:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-563805",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7567eb5045980ac302873aedd99a741b2c43f3ffc7c793740b51ddf13a299655",
	                "LowerDir": "/var/lib/docker/overlay2/9694efb013e5aed72249f05b0bbf90d3e017142a17528a152939e78b8d67d837-init/diff:/var/lib/docker/overlay2/fbfad8356f6358a1732e91f2e548b755c7ca75fd94f3b82c0a5a4ce9b2624c2c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9694efb013e5aed72249f05b0bbf90d3e017142a17528a152939e78b8d67d837/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9694efb013e5aed72249f05b0bbf90d3e017142a17528a152939e78b8d67d837/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9694efb013e5aed72249f05b0bbf90d3e017142a17528a152939e78b8d67d837/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-563805",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-563805/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-563805",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-563805",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-563805",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f7647ecdd4b34d2b072af45430f3b63364239613214d283cab0e42e8e962f9ef",
	            "SandboxKey": "/var/run/docker/netns/f7647ecdd4b3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33223"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33222"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-563805": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:e8:44:42:40:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9a4aaba57340b08a6dc80d718ca509a23c5f23e099fc7d8315ee78ac47b427de",
	                    "EndpointID": "68a0dcb8aa6452726bf36a3f75517275864e7cdc241c840b238e4b34ddde6dfa",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-563805",
	                        "7567eb504598"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
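The inspect payload above is a JSON array with one object per container, so the post-mortem only needs a handful of fields. A sketch of pulling out the pause-relevant state without modelling the full schema (the struct covers only fields visible in the output above):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// containerInfo models only the docker-inspect fields of interest;
	// all other JSON keys are ignored during decoding.
	type containerInfo struct {
		Name  string `json:"Name"`
		State struct {
			Status  string `json:"Status"`
			Running bool   `json:"Running"`
			Paused  bool   `json:"Paused"`
		} `json:"State"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-563805").Output()
		if err != nil {
			panic(err)
		}
		var infos []containerInfo // docker inspect always returns an array
		if err := json.Unmarshal(out, &infos); err != nil {
			panic(err)
		}
		for _, c := range infos {
			fmt.Printf("%s status=%s running=%v paused=%v\n",
				c.Name, c.State.Status, c.State.Running, c.State.Paused)
		}
	}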
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805: exit status 2 (369.113863ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-563805 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-563805 logs -n 25: (1.206089286s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-051488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                             │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                    │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ image   │ newest-cni-051083 image list --format=json                                                                                                                                                                                │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ pause   │ -p newest-cni-051083 --alsologtostderr -v=1                                                                                                                                                                               │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-563805 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                        │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-563805 --alsologtostderr -v=3                                                                                                                                                                    │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p newest-cni-051083                                                                                                                                                                                                      │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p newest-cni-051083                                                                                                                                                                                                      │ newest-cni-051083            │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p cert-options-318223 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-660693    │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-660693    │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-563805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                   │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                  │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:14 UTC │
	│ ssh     │ cert-options-318223 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ ssh     │ -p cert-options-318223 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p cert-options-318223                                                                                                                                                                                                    │ cert-options-318223          │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p auto-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                   │ auto-684669                  │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:14 UTC │
	│ image   │ embed-certs-051488 image list --format=json                                                                                                                                                                               │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	│ pause   │ -p embed-certs-051488 --alsologtostderr -v=1                                                                                                                                                                              │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │                     │
	│ delete  │ -p embed-certs-051488                                                                                                                                                                                                     │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	│ delete  │ -p embed-certs-051488                                                                                                                                                                                                     │ embed-certs-051488           │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	│ start   │ -p kindnet-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                  │ kindnet-684669               │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │                     │
	│ image   │ default-k8s-diff-port-563805 image list --format=json                                                                                                                                                                     │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	│ pause   │ -p default-k8s-diff-port-563805 --alsologtostderr -v=1                                                                                                                                                                    │ default-k8s-diff-port-563805 │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │                     │
	│ ssh     │ -p auto-684669 pgrep -a kubelet                                                                                                                                                                                           │ auto-684669                  │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:14:09
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:14:09.670087  412924 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:14:09.670336  412924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:14:09.670344  412924 out.go:374] Setting ErrFile to fd 2...
	I1017 20:14:09.670348  412924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:14:09.670551  412924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:14:09.671070  412924 out.go:368] Setting JSON to false
	I1017 20:14:09.672353  412924 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6998,"bootTime":1760725052,"procs":429,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:14:09.672463  412924 start.go:141] virtualization: kvm guest
	I1017 20:14:09.674640  412924 out.go:179] * [kindnet-684669] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:14:09.676060  412924 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:14:09.676083  412924 notify.go:220] Checking for updates...
	I1017 20:14:09.678986  412924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:14:09.680596  412924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:14:09.682134  412924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:14:09.683631  412924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:14:09.685013  412924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:14:09.686956  412924 config.go:182] Loaded profile config "auto-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:14:09.687106  412924 config.go:182] Loaded profile config "default-k8s-diff-port-563805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:14:09.687235  412924 config.go:182] Loaded profile config "kubernetes-upgrade-660693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:14:09.687365  412924 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:14:09.714921  412924 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:14:09.715049  412924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:14:09.777482  412924 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:14:09.765897473 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:14:09.777650  412924 docker.go:318] overlay module found
	I1017 20:14:09.779449  412924 out.go:179] * Using the docker driver based on user configuration
	I1017 20:14:09.780904  412924 start.go:305] selected driver: docker
	I1017 20:14:09.780938  412924 start.go:925] validating driver "docker" against <nil>
	I1017 20:14:09.780956  412924 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:14:09.781614  412924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:14:09.839297  412924 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 20:14:09.829912051 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:14:09.839481  412924 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:14:09.839724  412924 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:14:09.841937  412924 out.go:179] * Using Docker driver with root privileges
	I1017 20:14:09.843394  412924 cni.go:84] Creating CNI manager for "kindnet"
	I1017 20:14:09.843415  412924 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:14:09.843488  412924 start.go:349] cluster config:
	{Name:kindnet-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:14:09.844928  412924 out.go:179] * Starting "kindnet-684669" primary control-plane node in "kindnet-684669" cluster
	I1017 20:14:09.846248  412924 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:14:09.847661  412924 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:14:09.849005  412924 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:14:09.849072  412924 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 20:14:09.849085  412924 cache.go:58] Caching tarball of preloaded images
	I1017 20:14:09.849129  412924 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:14:09.849177  412924 preload.go:233] Found /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 20:14:09.849187  412924 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:14:09.849300  412924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/config.json ...
	I1017 20:14:09.849327  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/config.json: {Name:mk76cc40f98ce1fd9978a490757cb3c468f44416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:09.871651  412924 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:14:09.871677  412924 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:14:09.871699  412924 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:14:09.871731  412924 start.go:360] acquireMachinesLock for kindnet-684669: {Name:mkc6ec4425f15705bbeb59a41d5555bf1ec6bce9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:14:09.871883  412924 start.go:364] duration metric: took 99.802µs to acquireMachinesLock for "kindnet-684669"
	I1017 20:14:09.871912  412924 start.go:93] Provisioning new machine with config: &{Name:kindnet-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:14:09.872011  412924 start.go:125] createHost starting for "" (driver="docker")
	I1017 20:14:08.030325  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:08.530979  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:09.030883  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:09.530380  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:10.031005  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:10.530403  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:11.030554  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:11.530982  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:12.031017  407971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:12.117396  407971 kubeadm.go:1113] duration metric: took 4.667382741s to wait for elevateKubeSystemPrivileges
	I1017 20:14:12.117469  407971 kubeadm.go:402] duration metric: took 15.180595473s to StartCluster
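Note: the burst of `kubectl get sa default` runs above, spaced 500ms apart, is how startup waits for the default ServiceAccount to exist before declaring StartCluster done (the elevateKubeSystemPrivileges wait). A sketch of the same poll loop, assuming kubectl is on PATH and the kubeconfig path shown in the log:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// context expires, matching the 500ms cadence visible in the log.
func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}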
	I1017 20:14:12.117497  407971 settings.go:142] acquiring lock: {Name:mka4633fb25e97d0a4c6d64012444d90b7517c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:12.117775  407971 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:14:12.119662  407971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/kubeconfig: {Name:mk8d9127173829548953da47dbc13620240bd291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:12.119978  407971 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:14:12.119976  407971 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:14:12.120063  407971 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:14:12.120180  407971 config.go:182] Loaded profile config "auto-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:14:12.120187  407971 addons.go:69] Setting storage-provisioner=true in profile "auto-684669"
	I1017 20:14:12.120208  407971 addons.go:238] Setting addon storage-provisioner=true in "auto-684669"
	I1017 20:14:12.120243  407971 host.go:66] Checking if "auto-684669" exists ...
	I1017 20:14:12.120232  407971 addons.go:69] Setting default-storageclass=true in profile "auto-684669"
	I1017 20:14:12.120267  407971 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-684669"
	I1017 20:14:12.120710  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Status}}
	I1017 20:14:12.120925  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Status}}
	I1017 20:14:12.122310  407971 out.go:179] * Verifying Kubernetes components...
	I1017 20:14:12.124586  407971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:14:12.145166  407971 addons.go:238] Setting addon default-storageclass=true in "auto-684669"
	I1017 20:14:12.145209  407971 host.go:66] Checking if "auto-684669" exists ...
	I1017 20:14:12.145787  407971 cli_runner.go:164] Run: docker container inspect auto-684669 --format={{.State.Status}}
	I1017 20:14:12.149456  407971 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:14:12.151172  407971 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:14:12.151198  407971 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:14:12.151262  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:14:12.178836  407971 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:14:12.178871  407971 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:14:12.178952  407971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-684669
	I1017 20:14:12.187942  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:14:12.204934  407971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/auto-684669/id_rsa Username:docker}
	I1017 20:14:12.222121  407971 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:14:12.281273  407971 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:14:12.318364  407971 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:14:12.328339  407971 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:14:12.436911  407971 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1017 20:14:12.438501  407971 node_ready.go:35] waiting up to 15m0s for node "auto-684669" to be "Ready" ...
	I1017 20:14:12.692306  407971 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 20:14:12.693790  407971 addons.go:514] duration metric: took 573.72582ms for enable addons: enabled=[storage-provisioner default-storageclass]
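Note: the addon flow visible above is two steps per addon: copy the manifest into /etc/kubernetes/addons on the node, then apply it with the in-node kubectl and kubeconfig. A local sketch of those two steps, with hypothetical paths and without the SSH hop minikube uses:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon writes manifest bytes to path and applies them with kubectl,
// the same two steps the log shows for storage-provisioner.yaml and
// storageclass.yaml (minikube performs the copy over SSH instead).
func applyAddon(path string, manifest []byte, kubeconfig string) error {
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	cmd := exec.Command("kubectl", "apply", "-f", path)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo-addon\n")
	// hypothetical manifest and kubeconfig paths, for illustration only
	if err := applyAddon("/tmp/demo-addon.yaml", manifest, os.Getenv("HOME")+"/.kube/config"); err != nil {
		panic(err)
	}
	fmt.Println("addon applied")
}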
	W1017 20:14:11.061013  405011 pod_ready.go:104] pod "coredns-66bc5c9577-bsp94" is not "Ready", error: <nil>
	I1017 20:14:11.560899  405011 pod_ready.go:94] pod "coredns-66bc5c9577-bsp94" is "Ready"
	I1017 20:14:11.560941  405011 pod_ready.go:86] duration metric: took 31.006838817s for pod "coredns-66bc5c9577-bsp94" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.563537  405011 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.568264  405011 pod_ready.go:94] pod "etcd-default-k8s-diff-port-563805" is "Ready"
	I1017 20:14:11.568294  405011 pod_ready.go:86] duration metric: took 4.728927ms for pod "etcd-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.570685  405011 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.575344  405011 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-563805" is "Ready"
	I1017 20:14:11.575377  405011 pod_ready.go:86] duration metric: took 4.666923ms for pod "kube-apiserver-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.578056  405011 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.759271  405011 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-563805" is "Ready"
	I1017 20:14:11.759302  405011 pod_ready.go:86] duration metric: took 181.205762ms for pod "kube-controller-manager-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:11.958863  405011 pod_ready.go:83] waiting for pod "kube-proxy-g7749" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:12.358260  405011 pod_ready.go:94] pod "kube-proxy-g7749" is "Ready"
	I1017 20:14:12.358292  405011 pod_ready.go:86] duration metric: took 399.400355ms for pod "kube-proxy-g7749" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:12.560082  405011 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:12.957999  405011 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-563805" is "Ready"
	I1017 20:14:12.958029  405011 pod_ready.go:86] duration metric: took 397.913437ms for pod "kube-scheduler-default-k8s-diff-port-563805" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:12.958040  405011 pod_ready.go:40] duration metric: took 32.408827529s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:14:13.013030  405011 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:14:13.018028  405011 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-563805" cluster and "default" namespace by default
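Note: the pod_ready sequence above waits on one label selector per control-plane component. A rough equivalent with plain kubectl is sketched below; it only waits for Ready, whereas the check in the log also accepts a pod that is gone:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// one selector per control-plane component, matching the labels listed in the log
	selectors := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		// kubectl blocks until pods matching sel report Ready or the timeout fires
		cmd := exec.Command("kubectl", "wait", "--for=condition=Ready",
			"pod", "-l", sel, "-n", "kube-system", "--timeout=4m")
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("waiting for %s: %v\n%s", sel, err, out))
		}
		fmt.Printf("pods with %s are Ready\n", sel)
	}
}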
	I1017 20:14:09.874397  412924 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:14:09.874595  412924 start.go:159] libmachine.API.Create for "kindnet-684669" (driver="docker")
	I1017 20:14:09.874625  412924 client.go:168] LocalClient.Create starting
	I1017 20:14:09.874726  412924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem
	I1017 20:14:09.874791  412924 main.go:141] libmachine: Decoding PEM data...
	I1017 20:14:09.874817  412924 main.go:141] libmachine: Parsing certificate...
	I1017 20:14:09.874868  412924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem
	I1017 20:14:09.874887  412924 main.go:141] libmachine: Decoding PEM data...
	I1017 20:14:09.874897  412924 main.go:141] libmachine: Parsing certificate...
	I1017 20:14:09.875219  412924 cli_runner.go:164] Run: docker network inspect kindnet-684669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:14:09.893105  412924 cli_runner.go:211] docker network inspect kindnet-684669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:14:09.893186  412924 network_create.go:284] running [docker network inspect kindnet-684669] to gather additional debugging logs...
	I1017 20:14:09.893213  412924 cli_runner.go:164] Run: docker network inspect kindnet-684669
	W1017 20:14:09.910546  412924 cli_runner.go:211] docker network inspect kindnet-684669 returned with exit code 1
	I1017 20:14:09.910578  412924 network_create.go:287] error running [docker network inspect kindnet-684669]: docker network inspect kindnet-684669: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-684669 not found
	I1017 20:14:09.910600  412924 network_create.go:289] output of [docker network inspect kindnet-684669]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-684669 not found
	
	** /stderr **
	I1017 20:14:09.910717  412924 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:14:09.930380  412924 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d34a70da1174 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:b8:c9:c3:2e:b0} reservation:<nil>}
	I1017 20:14:09.931100  412924 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-07edace58173 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:f3:28:2c:52:ce} reservation:<nil>}
	I1017 20:14:09.931858  412924 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a478249e8fe7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:51:65:8d:cb:60} reservation:<nil>}
	I1017 20:14:09.932719  412924 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7ed8ef1bc0a4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:6a:98:d7:e8:28} reservation:<nil>}
	I1017 20:14:09.933070  412924 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9a4aaba57340 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:16:30:99:20:8d:be} reservation:<nil>}
	I1017 20:14:09.933868  412924 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00208e170}
	I1017 20:14:09.933892  412924 network_create.go:124] attempt to create docker network kindnet-684669 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1017 20:14:09.933945  412924 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-684669 kindnet-684669
	I1017 20:14:09.997406  412924 network_create.go:108] docker network kindnet-684669 192.168.94.0/24 created
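Note: subnet selection above walks candidate 192.168.x.0/24 networks in steps of 9 in the third octet (.49, .58, .67, .76, .85) and stops at the first one no existing bridge occupies (.94). A simplified Go sketch of that walk; the real code also probes host interfaces and reservations:

package main

import "fmt"

// firstFreeSubnet tries candidate /24s starting at 192.168.49.0 in steps of 9,
// the progression visible in the log, and returns the first unused one.
func firstFreeSubnet(taken map[string]bool) (string, error) {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	// subnets the log reports as taken by earlier test profiles
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	cidr, err := firstFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", cidr) // prints 192.168.94.0/24
}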
	I1017 20:14:09.997443  412924 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-684669" container
	I1017 20:14:09.997521  412924 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:14:10.018449  412924 cli_runner.go:164] Run: docker volume create kindnet-684669 --label name.minikube.sigs.k8s.io=kindnet-684669 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:14:10.038413  412924 oci.go:103] Successfully created a docker volume kindnet-684669
	I1017 20:14:10.038499  412924 cli_runner.go:164] Run: docker run --rm --name kindnet-684669-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-684669 --entrypoint /usr/bin/test -v kindnet-684669:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:14:10.457884  412924 oci.go:107] Successfully prepared a docker volume kindnet-684669
	I1017 20:14:10.457935  412924 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:14:10.457961  412924 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:14:10.458044  412924 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-684669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 20:14:12.942197  407971 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-684669" context rescaled to 1 replicas
	W1017 20:14:14.442350  407971 node_ready.go:57] node "auto-684669" has "Ready":"False" status (will retry)
	W1017 20:14:16.941556  407971 node_ready.go:57] node "auto-684669" has "Ready":"False" status (will retry)
	I1017 20:14:15.244667  412924 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-684669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.786563684s)
	I1017 20:14:15.244697  412924 kic.go:203] duration metric: took 4.786732459s to extract preloaded images to volume ...
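Note: preloading above runs a throwaway container whose entrypoint is tar, mounting the lz4 preload read-only and the machine's named volume at /extractDir. A Go sketch issuing the same docker invocation (image digest omitted, tarball path assumed):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		// assumed local path and names, for illustration
		tarball = "/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
		volume  = "kindnet-684669"
		image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"
	)
	// Mount the preload read-only, mount the target volume at /extractDir,
	// and decompress with lz4, mirroring the docker run recorded in the log.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	start := time.Now()
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
	}
	fmt.Printf("extracted preloaded images in %s\n", time.Since(start))
}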
	W1017 20:14:15.244815  412924 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1017 20:14:15.244846  412924 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1017 20:14:15.244879  412924 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:14:15.302251  412924 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-684669 --name kindnet-684669 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-684669 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-684669 --network kindnet-684669 --ip 192.168.94.2 --volume kindnet-684669:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:14:15.616165  412924 cli_runner.go:164] Run: docker container inspect kindnet-684669 --format={{.State.Running}}
	I1017 20:14:15.635817  412924 cli_runner.go:164] Run: docker container inspect kindnet-684669 --format={{.State.Status}}
	I1017 20:14:15.657451  412924 cli_runner.go:164] Run: docker exec kindnet-684669 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:14:15.708928  412924 oci.go:144] the created container "kindnet-684669" has a running status.
	I1017 20:14:15.708971  412924 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa...
	I1017 20:14:15.938780  412924 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:14:15.973031  412924 cli_runner.go:164] Run: docker container inspect kindnet-684669 --format={{.State.Status}}
	I1017 20:14:15.996064  412924 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:14:15.996094  412924 kic_runner.go:114] Args: [docker exec --privileged kindnet-684669 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:14:16.043663  412924 cli_runner.go:164] Run: docker container inspect kindnet-684669 --format={{.State.Status}}
	I1017 20:14:16.065167  412924 machine.go:93] provisionDockerMachine start ...
	I1017 20:14:16.065275  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:16.086589  412924 main.go:141] libmachine: Using SSH client type: native
	I1017 20:14:16.086907  412924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1017 20:14:16.086927  412924 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:14:16.225702  412924 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-684669
	
	I1017 20:14:16.225734  412924 ubuntu.go:182] provisioning hostname "kindnet-684669"
	I1017 20:14:16.225819  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:16.245685  412924 main.go:141] libmachine: Using SSH client type: native
	I1017 20:14:16.245966  412924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1017 20:14:16.245983  412924 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-684669 && echo "kindnet-684669" | sudo tee /etc/hostname
	I1017 20:14:16.394498  412924 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-684669
	
	I1017 20:14:16.394596  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:16.413806  412924 main.go:141] libmachine: Using SSH client type: native
	I1017 20:14:16.414043  412924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1017 20:14:16.414071  412924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-684669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-684669/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-684669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:14:16.554685  412924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:14:16.554720  412924 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-135723/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-135723/.minikube}
	I1017 20:14:16.554784  412924 ubuntu.go:190] setting up certificates
	I1017 20:14:16.554798  412924 provision.go:84] configureAuth start
	I1017 20:14:16.554860  412924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-684669
	I1017 20:14:16.574011  412924 provision.go:143] copyHostCerts
	I1017 20:14:16.574095  412924 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem, removing ...
	I1017 20:14:16.574114  412924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem
	I1017 20:14:16.574200  412924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/ca.pem (1078 bytes)
	I1017 20:14:16.574314  412924 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem, removing ...
	I1017 20:14:16.574338  412924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem
	I1017 20:14:16.574383  412924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/cert.pem (1123 bytes)
	I1017 20:14:16.574477  412924 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem, removing ...
	I1017 20:14:16.574488  412924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem
	I1017 20:14:16.574526  412924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-135723/.minikube/key.pem (1675 bytes)
	I1017 20:14:16.574615  412924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem org=jenkins.kindnet-684669 san=[127.0.0.1 192.168.94.2 kindnet-684669 localhost minikube]
	I1017 20:14:16.675706  412924 provision.go:177] copyRemoteCerts
	I1017 20:14:16.675799  412924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:14:16.675851  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:16.694306  412924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa Username:docker}
	I1017 20:14:16.792239  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1017 20:14:16.813939  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1017 20:14:16.832601  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:14:16.851534  412924 provision.go:87] duration metric: took 296.717779ms to configureAuth
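Note: configureAuth above copies the host CA material and generates a server certificate whose SANs cover 127.0.0.1, 192.168.94.2, kindnet-684669, localhost and minikube. A compact crypto/x509 sketch of generating such a SAN certificate; it self-signs for brevity where minikube signs with its ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-684669"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the SAN list from the provision.go:117 line above
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:    []string{"kindnet-684669", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
	fmt.Println("wrote server.pem with SANs", tmpl.DNSNames, tmpl.IPAddresses)
}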
	I1017 20:14:16.851569  412924 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:14:16.851791  412924 config.go:182] Loaded profile config "kindnet-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:14:16.851915  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:16.870507  412924 main.go:141] libmachine: Using SSH client type: native
	I1017 20:14:16.870755  412924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1017 20:14:16.870777  412924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:14:17.125313  412924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:14:17.125338  412924 machine.go:96] duration metric: took 1.060143778s to provisionDockerMachine
	I1017 20:14:17.125351  412924 client.go:171] duration metric: took 7.250719495s to LocalClient.Create
	I1017 20:14:17.125372  412924 start.go:167] duration metric: took 7.250778897s to libmachine.API.Create "kindnet-684669"
	I1017 20:14:17.125381  412924 start.go:293] postStartSetup for "kindnet-684669" (driver="docker")
	I1017 20:14:17.125392  412924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:14:17.125454  412924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:14:17.125503  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:17.144041  412924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa Username:docker}
	I1017 20:14:17.243483  412924 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:14:17.247444  412924 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:14:17.247469  412924 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:14:17.247480  412924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/addons for local assets ...
	I1017 20:14:17.247533  412924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-135723/.minikube/files for local assets ...
	I1017 20:14:17.247621  412924 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem -> 1392172.pem in /etc/ssl/certs
	I1017 20:14:17.247758  412924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:14:17.256386  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:14:17.279237  412924 start.go:296] duration metric: took 153.839594ms for postStartSetup
	I1017 20:14:17.279621  412924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-684669
	I1017 20:14:17.298349  412924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/config.json ...
	I1017 20:14:17.298659  412924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:14:17.298707  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:17.317475  412924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa Username:docker}
	I1017 20:14:17.414249  412924 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:14:17.419272  412924 start.go:128] duration metric: took 7.547241521s to createHost
	I1017 20:14:17.419303  412924 start.go:83] releasing machines lock for "kindnet-684669", held for 7.547404885s
	I1017 20:14:17.419374  412924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-684669
	I1017 20:14:17.437439  412924 ssh_runner.go:195] Run: cat /version.json
	I1017 20:14:17.437493  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:17.437507  412924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:14:17.437564  412924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-684669
	I1017 20:14:17.457561  412924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa Username:docker}
	I1017 20:14:17.457561  412924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/kindnet-684669/id_rsa Username:docker}
	I1017 20:14:17.609556  412924 ssh_runner.go:195] Run: systemctl --version
	I1017 20:14:17.616418  412924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:14:17.653503  412924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:14:17.658767  412924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:14:17.658839  412924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:14:17.688232  412924 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 20:14:17.688260  412924 start.go:495] detecting cgroup driver to use...
	I1017 20:14:17.688297  412924 detect.go:190] detected "systemd" cgroup driver on host os
	I1017 20:14:17.688344  412924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:14:17.706555  412924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:14:17.719818  412924 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:14:17.719872  412924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:14:17.737118  412924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:14:17.756244  412924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:14:17.842189  412924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:14:17.935434  412924 docker.go:234] disabling docker service ...
	I1017 20:14:17.935509  412924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:14:17.956207  412924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:14:17.970490  412924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:14:18.061242  412924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:14:18.148014  412924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:14:18.161296  412924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:14:18.176509  412924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:14:18.176569  412924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.190097  412924 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1017 20:14:18.190169  412924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.199733  412924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.209560  412924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.218921  412924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:14:18.227606  412924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.236831  412924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.251330  412924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:14:18.260778  412924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:14:18.269243  412924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:14:18.277263  412924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:14:18.359763  412924 ssh_runner.go:195] Run: sudo systemctl restart crio
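Note: the run of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. A Go sketch of the same in-place key rewrite for two of those keys; minikube itself shells out to sed, as logged:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteLine replaces any existing `key = ...` line with the given value,
// the same edit the sed one-liners in the log perform on 02-crio.conf.
func rewriteLine(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = rewriteLine(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = rewriteLine(conf, "cgroup_manager", "systemd")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated CRI-O config; run `systemctl restart crio` to apply")
}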
	I1017 20:14:18.469347  412924 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:14:18.469423  412924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:14:18.473692  412924 start.go:563] Will wait 60s for crictl version
	I1017 20:14:18.473790  412924 ssh_runner.go:195] Run: which crictl
	I1017 20:14:18.477582  412924 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:14:18.504980  412924 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:14:18.505060  412924 ssh_runner.go:195] Run: crio --version
	I1017 20:14:18.534168  412924 ssh_runner.go:195] Run: crio --version
	I1017 20:14:18.565081  412924 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:14:18.566563  412924 cli_runner.go:164] Run: docker network inspect kindnet-684669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:14:18.584866  412924 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1017 20:14:18.589128  412924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:14:18.600036  412924 kubeadm.go:883] updating cluster {Name:kindnet-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:14:18.600143  412924 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:14:18.600207  412924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:14:18.633892  412924 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:14:18.633913  412924 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:14:18.633959  412924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:14:18.661833  412924 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:14:18.661856  412924 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:14:18.661864  412924 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1017 20:14:18.661949  412924 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-684669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1017 20:14:18.662007  412924 ssh_runner.go:195] Run: crio config
	I1017 20:14:18.710788  412924 cni.go:84] Creating CNI manager for "kindnet"
	I1017 20:14:18.710820  412924 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:14:18.710847  412924 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-684669 NodeName:kindnet-684669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:14:18.711000  412924 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-684669"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:14:18.711074  412924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:14:18.719898  412924 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:14:18.719955  412924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:14:18.728188  412924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1017 20:14:18.741671  412924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:14:18.758533  412924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1017 20:14:18.772250  412924 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:14:18.776180  412924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:14:18.786665  412924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:14:18.870764  412924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:14:18.892659  412924 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669 for IP: 192.168.94.2
	I1017 20:14:18.892684  412924 certs.go:195] generating shared ca certs ...
	I1017 20:14:18.892707  412924 certs.go:227] acquiring lock for ca certs: {Name:mk78a17f4b60da022f45e27b806c8fe17998b92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:18.892916  412924 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key
	I1017 20:14:18.892983  412924 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key
	I1017 20:14:18.892997  412924 certs.go:257] generating profile certs ...
	I1017 20:14:18.893077  412924 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/client.key
	I1017 20:14:18.893103  412924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/client.crt with IP's: []
	I1017 20:14:19.033448  412924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/client.crt ...
	I1017 20:14:19.033477  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/client.crt: {Name:mk2a57d317a69e1a17a17f2649a36a4468e31c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:19.033656  412924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/client.key ...
	I1017 20:14:19.033667  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/client.key: {Name:mk27fc367e6992c5aa4115122d8df0c5bdbcea28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:19.033759  412924 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.key.d43a5f48
	I1017 20:14:19.033774  412924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.crt.d43a5f48 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1017 20:14:19.396349  412924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.crt.d43a5f48 ...
	I1017 20:14:19.396385  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.crt.d43a5f48: {Name:mkc94fc19212a4862771e31695dcfb01f79ee99f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:19.396549  412924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.key.d43a5f48 ...
	I1017 20:14:19.396562  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.key.d43a5f48: {Name:mka74221e3a37dec5c10e028c66239411e71088c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:19.396634  412924 certs.go:382] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.crt.d43a5f48 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.crt
	I1017 20:14:19.396749  412924 certs.go:386] copying /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.key.d43a5f48 -> /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.key
	I1017 20:14:19.396821  412924 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.key
	I1017 20:14:19.396839  412924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.crt with IP's: []
	W1017 20:14:18.941857  407971 node_ready.go:57] node "auto-684669" has "Ready":"False" status (will retry)
	W1017 20:14:20.942410  407971 node_ready.go:57] node "auto-684669" has "Ready":"False" status (will retry)
	I1017 20:14:19.701187  412924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.crt ...
	I1017 20:14:19.701217  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.crt: {Name:mk2e7fb78a805d1801962648d2d9cc4926d45b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:19.701395  412924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.key ...
	I1017 20:14:19.701410  412924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.key: {Name:mk864dc0643cd858464fee4246a0effbe4361716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:14:19.701607  412924 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem (1338 bytes)
	W1017 20:14:19.701653  412924 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217_empty.pem, impossibly tiny 0 bytes
	I1017 20:14:19.701664  412924 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:14:19.701686  412924 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/ca.pem (1078 bytes)
	I1017 20:14:19.701708  412924 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:14:19.701728  412924 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/certs/key.pem (1675 bytes)
	I1017 20:14:19.701799  412924 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem (1708 bytes)
	I1017 20:14:19.702559  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:14:19.721898  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:14:19.740825  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:14:19.759946  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:14:19.779166  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 20:14:19.798773  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:14:19.817157  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:14:19.835563  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/kindnet-684669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:14:19.853974  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/certs/139217.pem --> /usr/share/ca-certificates/139217.pem (1338 bytes)
	I1017 20:14:19.874815  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/ssl/certs/1392172.pem --> /usr/share/ca-certificates/1392172.pem (1708 bytes)
	I1017 20:14:19.893317  412924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-135723/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:14:19.914026  412924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:14:19.927464  412924 ssh_runner.go:195] Run: openssl version
	I1017 20:14:19.933999  412924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139217.pem && ln -fs /usr/share/ca-certificates/139217.pem /etc/ssl/certs/139217.pem"
	I1017 20:14:19.943149  412924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139217.pem
	I1017 20:14:19.947209  412924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/139217.pem
	I1017 20:14:19.947268  412924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139217.pem
	I1017 20:14:19.982111  412924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139217.pem /etc/ssl/certs/51391683.0"
	I1017 20:14:19.991603  412924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1392172.pem && ln -fs /usr/share/ca-certificates/1392172.pem /etc/ssl/certs/1392172.pem"
	I1017 20:14:20.001476  412924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1392172.pem
	I1017 20:14:20.005756  412924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1392172.pem
	I1017 20:14:20.005938  412924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1392172.pem
	I1017 20:14:20.041113  412924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1392172.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:14:20.050962  412924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:14:20.060575  412924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:14:20.064868  412924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:14:20.064926  412924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:14:20.099759  412924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
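	
	The openssl steps above are how minikube installs each CA into the node's trust store: "openssl x509 -hash -noout" prints the certificate's subject hash, and a <hash>.0 symlink under /etc/ssl/certs (51391683.0, 3ec20f2e.0, b5213941.0 above) lets OpenSSL's CApath lookup find the PEM. A minimal Go sketch of that step — not minikube's actual code — shelling out to the same openssl binary and assuming write access to /etc/ssl/certs:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCert mirrors the log's "openssl x509 -hash" plus "ln -fs" pair:
	// compute the subject hash, then point /etc/ssl/certs/<hash>.0 at the PEM.
	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace a stale link if present
		return os.Symlink(pemPath, link)
	}
	
	func main() {
		// Path taken from the log; any PEM certificate works.
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	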
	I1017 20:14:20.109788  412924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:14:20.113825  412924 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
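	
	The failed stat above is the expected signal on a fresh node: minikube probes for the kubelet client certificate and, when it is absent, treats the run as a first start. A tiny Go sketch of the same check, with the path taken from the log:
	
	package main
	
	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)
	
	func main() {
		// Same probe as certs.go:400 above: absence of this cert marks a first start.
		_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		switch {
		case errors.Is(err, fs.ErrNotExist):
			fmt.Println("cert doesn't exist, likely first start")
		case err == nil:
			fmt.Println("cert present, reusing existing cluster state")
		default:
			fmt.Println("stat failed:", err)
		}
	}
	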
	I1017 20:14:20.113875  412924 kubeadm.go:400] StartCluster: {Name:kindnet-684669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-684669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:14:20.113936  412924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:14:20.113977  412924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:14:20.145064  412924 cri.go:89] found id: ""
	I1017 20:14:20.145136  412924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:14:20.154049  412924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:14:20.162801  412924 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:14:20.162862  412924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:14:20.171481  412924 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:14:20.171501  412924 kubeadm.go:157] found existing configuration files:
	
	I1017 20:14:20.171541  412924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:14:20.179816  412924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:14:20.179877  412924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:14:20.188277  412924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:14:20.197397  412924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:14:20.197452  412924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:14:20.206296  412924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:14:20.214773  412924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:14:20.214835  412924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:14:20.223121  412924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:14:20.231529  412924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:14:20.231595  412924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
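	
	The grep/rm sequence above is a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so kubeadm init can regenerate it. A rough pure-Go equivalent of that pass, with the paths and endpoint taken from the log and error handling simplified:
	
	package main
	
	import (
		"bytes"
		"os"
	)
	
	const endpoint = "https://control-plane.minikube.internal:8443"
	
	func main() {
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(conf)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// Missing file or wrong endpoint: drop it (rm -f is idempotent).
				_ = os.Remove(conf)
			}
		}
	}
	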
	I1017 20:14:20.239822  412924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:14:20.318712  412924 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1017 20:14:20.388635  412924 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1017 20:14:22.942582  407971 node_ready.go:57] node "auto-684669" has "Ready":"False" status (will retry)
	I1017 20:14:23.441617  407971 node_ready.go:49] node "auto-684669" is "Ready"
	I1017 20:14:23.441656  407971 node_ready.go:38] duration metric: took 11.003119715s for node "auto-684669" to be "Ready" ...
	I1017 20:14:23.441673  407971 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:14:23.441733  407971 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:14:23.455702  407971 api_server.go:72] duration metric: took 11.335689053s to wait for apiserver process to appear ...
	I1017 20:14:23.455734  407971 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:14:23.455769  407971 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1017 20:14:23.460131  407971 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1017 20:14:23.461312  407971 api_server.go:141] control plane version: v1.34.1
	I1017 20:14:23.461344  407971 api_server.go:131] duration metric: took 5.590557ms to wait for apiserver health ...
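	
	The healthz wait above is a plain HTTP poll: minikube hits https://<node-ip>:8443/healthz until it answers 200 "ok". A minimal sketch of such a probe, with the address from the log; InsecureSkipVerify stands in for minikube's real CA handling and is not suitable outside a sketch:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Sketch only: the real client trusts the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 120; i++ { // ~1 minute of 500ms retries
			resp, err := client.Get("https://192.168.103.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Fprintln(os.Stderr, "apiserver never became healthy")
		os.Exit(1)
	}
	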
	I1017 20:14:23.461355  407971 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:14:23.464560  407971 system_pods.go:59] 8 kube-system pods found
	I1017 20:14:23.464589  407971 system_pods.go:61] "coredns-66bc5c9577-5qbtt" [81a7206d-a769-47ad-9e2f-d0d0af4c51a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:14:23.464599  407971 system_pods.go:61] "etcd-auto-684669" [bfbd3250-1bcf-40cc-844f-1f3af66a928e] Running
	I1017 20:14:23.464607  407971 system_pods.go:61] "kindnet-22pt2" [ef6a6112-7fde-468f-b609-6a35a45badd3] Running
	I1017 20:14:23.464612  407971 system_pods.go:61] "kube-apiserver-auto-684669" [002c263a-5150-4b93-ad70-d5e03aaa24a3] Running
	I1017 20:14:23.464618  407971 system_pods.go:61] "kube-controller-manager-auto-684669" [ead7e386-0dd6-4cff-8c31-61cfc8e1c741] Running
	I1017 20:14:23.464623  407971 system_pods.go:61] "kube-proxy-nwck8" [92519eab-a167-402a-ae5d-f4323f73c06e] Running
	I1017 20:14:23.464628  407971 system_pods.go:61] "kube-scheduler-auto-684669" [8bb5861d-204f-43d9-b2d0-510dff5c22c0] Running
	I1017 20:14:23.464634  407971 system_pods.go:61] "storage-provisioner" [fb95060a-e1b8-4ee6-9ef4-3495dce3a0e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:14:23.464655  407971 system_pods.go:74] duration metric: took 3.284171ms to wait for pod list to return data ...
	I1017 20:14:23.464667  407971 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:14:23.467272  407971 default_sa.go:45] found service account: "default"
	I1017 20:14:23.467298  407971 default_sa.go:55] duration metric: took 2.624931ms for default service account to be created ...
	I1017 20:14:23.467307  407971 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:14:23.471985  407971 system_pods.go:86] 8 kube-system pods found
	I1017 20:14:23.472028  407971 system_pods.go:89] "coredns-66bc5c9577-5qbtt" [81a7206d-a769-47ad-9e2f-d0d0af4c51a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:14:23.472038  407971 system_pods.go:89] "etcd-auto-684669" [bfbd3250-1bcf-40cc-844f-1f3af66a928e] Running
	I1017 20:14:23.472046  407971 system_pods.go:89] "kindnet-22pt2" [ef6a6112-7fde-468f-b609-6a35a45badd3] Running
	I1017 20:14:23.472052  407971 system_pods.go:89] "kube-apiserver-auto-684669" [002c263a-5150-4b93-ad70-d5e03aaa24a3] Running
	I1017 20:14:23.472064  407971 system_pods.go:89] "kube-controller-manager-auto-684669" [ead7e386-0dd6-4cff-8c31-61cfc8e1c741] Running
	I1017 20:14:23.472073  407971 system_pods.go:89] "kube-proxy-nwck8" [92519eab-a167-402a-ae5d-f4323f73c06e] Running
	I1017 20:14:23.472078  407971 system_pods.go:89] "kube-scheduler-auto-684669" [8bb5861d-204f-43d9-b2d0-510dff5c22c0] Running
	I1017 20:14:23.472085  407971 system_pods.go:89] "storage-provisioner" [fb95060a-e1b8-4ee6-9ef4-3495dce3a0e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:14:23.472111  407971 retry.go:31] will retry after 281.380432ms: missing components: kube-dns
	I1017 20:14:23.758176  407971 system_pods.go:86] 8 kube-system pods found
	I1017 20:14:23.758219  407971 system_pods.go:89] "coredns-66bc5c9577-5qbtt" [81a7206d-a769-47ad-9e2f-d0d0af4c51a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:14:23.758228  407971 system_pods.go:89] "etcd-auto-684669" [bfbd3250-1bcf-40cc-844f-1f3af66a928e] Running
	I1017 20:14:23.758237  407971 system_pods.go:89] "kindnet-22pt2" [ef6a6112-7fde-468f-b609-6a35a45badd3] Running
	I1017 20:14:23.758242  407971 system_pods.go:89] "kube-apiserver-auto-684669" [002c263a-5150-4b93-ad70-d5e03aaa24a3] Running
	I1017 20:14:23.758249  407971 system_pods.go:89] "kube-controller-manager-auto-684669" [ead7e386-0dd6-4cff-8c31-61cfc8e1c741] Running
	I1017 20:14:23.758260  407971 system_pods.go:89] "kube-proxy-nwck8" [92519eab-a167-402a-ae5d-f4323f73c06e] Running
	I1017 20:14:23.758265  407971 system_pods.go:89] "kube-scheduler-auto-684669" [8bb5861d-204f-43d9-b2d0-510dff5c22c0] Running
	I1017 20:14:23.758278  407971 system_pods.go:89] "storage-provisioner" [fb95060a-e1b8-4ee6-9ef4-3495dce3a0e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:14:23.758300  407971 retry.go:31] will retry after 339.214284ms: missing components: kube-dns
	I1017 20:14:24.101645  407971 system_pods.go:86] 8 kube-system pods found
	I1017 20:14:24.101696  407971 system_pods.go:89] "coredns-66bc5c9577-5qbtt" [81a7206d-a769-47ad-9e2f-d0d0af4c51a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:14:24.101706  407971 system_pods.go:89] "etcd-auto-684669" [bfbd3250-1bcf-40cc-844f-1f3af66a928e] Running
	I1017 20:14:24.101714  407971 system_pods.go:89] "kindnet-22pt2" [ef6a6112-7fde-468f-b609-6a35a45badd3] Running
	I1017 20:14:24.101719  407971 system_pods.go:89] "kube-apiserver-auto-684669" [002c263a-5150-4b93-ad70-d5e03aaa24a3] Running
	I1017 20:14:24.101725  407971 system_pods.go:89] "kube-controller-manager-auto-684669" [ead7e386-0dd6-4cff-8c31-61cfc8e1c741] Running
	I1017 20:14:24.101734  407971 system_pods.go:89] "kube-proxy-nwck8" [92519eab-a167-402a-ae5d-f4323f73c06e] Running
	I1017 20:14:24.101773  407971 system_pods.go:89] "kube-scheduler-auto-684669" [8bb5861d-204f-43d9-b2d0-510dff5c22c0] Running
	I1017 20:14:24.101787  407971 system_pods.go:89] "storage-provisioner" [fb95060a-e1b8-4ee6-9ef4-3495dce3a0e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:14:24.101808  407971 retry.go:31] will retry after 359.497927ms: missing components: kube-dns
	I1017 20:14:24.466984  407971 system_pods.go:86] 8 kube-system pods found
	I1017 20:14:24.467036  407971 system_pods.go:89] "coredns-66bc5c9577-5qbtt" [81a7206d-a769-47ad-9e2f-d0d0af4c51a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:14:24.467046  407971 system_pods.go:89] "etcd-auto-684669" [bfbd3250-1bcf-40cc-844f-1f3af66a928e] Running
	I1017 20:14:24.467054  407971 system_pods.go:89] "kindnet-22pt2" [ef6a6112-7fde-468f-b609-6a35a45badd3] Running
	I1017 20:14:24.467060  407971 system_pods.go:89] "kube-apiserver-auto-684669" [002c263a-5150-4b93-ad70-d5e03aaa24a3] Running
	I1017 20:14:24.467067  407971 system_pods.go:89] "kube-controller-manager-auto-684669" [ead7e386-0dd6-4cff-8c31-61cfc8e1c741] Running
	I1017 20:14:24.467081  407971 system_pods.go:89] "kube-proxy-nwck8" [92519eab-a167-402a-ae5d-f4323f73c06e] Running
	I1017 20:14:24.467092  407971 system_pods.go:89] "kube-scheduler-auto-684669" [8bb5861d-204f-43d9-b2d0-510dff5c22c0] Running
	I1017 20:14:24.467100  407971 system_pods.go:89] "storage-provisioner" [fb95060a-e1b8-4ee6-9ef4-3495dce3a0e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:14:24.467123  407971 retry.go:31] will retry after 563.200817ms: missing components: kube-dns
	I1017 20:14:25.036239  407971 system_pods.go:86] 8 kube-system pods found
	I1017 20:14:25.036278  407971 system_pods.go:89] "coredns-66bc5c9577-5qbtt" [81a7206d-a769-47ad-9e2f-d0d0af4c51a7] Running
	I1017 20:14:25.036286  407971 system_pods.go:89] "etcd-auto-684669" [bfbd3250-1bcf-40cc-844f-1f3af66a928e] Running
	I1017 20:14:25.036293  407971 system_pods.go:89] "kindnet-22pt2" [ef6a6112-7fde-468f-b609-6a35a45badd3] Running
	I1017 20:14:25.036298  407971 system_pods.go:89] "kube-apiserver-auto-684669" [002c263a-5150-4b93-ad70-d5e03aaa24a3] Running
	I1017 20:14:25.036314  407971 system_pods.go:89] "kube-controller-manager-auto-684669" [ead7e386-0dd6-4cff-8c31-61cfc8e1c741] Running
	I1017 20:14:25.036320  407971 system_pods.go:89] "kube-proxy-nwck8" [92519eab-a167-402a-ae5d-f4323f73c06e] Running
	I1017 20:14:25.036328  407971 system_pods.go:89] "kube-scheduler-auto-684669" [8bb5861d-204f-43d9-b2d0-510dff5c22c0] Running
	I1017 20:14:25.036333  407971 system_pods.go:89] "storage-provisioner" [fb95060a-e1b8-4ee6-9ef4-3495dce3a0e0] Running
	I1017 20:14:25.036344  407971 system_pods.go:126] duration metric: took 1.569030648s to wait for k8s-apps to be running ...
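	
	The retry lines above (281ms, 339ms, 359ms, 563ms) show the pattern used while waiting for kube-dns: re-check the pod list at growing, jittered intervals until nothing is missing or a deadline passes. A generic sketch of that loop; the interval schedule here is illustrative, not minikube's exact backoff:
	
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// retryUntil re-runs check at growing, jittered intervals until it passes
	// or the deadline expires - the shape of retry.go:31 in the log.
	func retryUntil(deadline time.Duration, check func() error) error {
		start := time.Now()
		wait := 250 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("gave up after %s: %w", deadline, err)
			}
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			wait += time.Duration(rand.Int63n(int64(wait))) // illustrative growth, not minikube's schedule
		}
	}
	
	func main() {
		pending := 3 // stand-in for "kube-dns not yet running" checks
		err := retryUntil(4*time.Minute, func() error {
			if pending > 0 {
				pending--
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println("done, err =", err)
	}
	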
	I1017 20:14:25.036355  407971 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:14:25.036407  407971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:14:25.055211  407971 system_svc.go:56] duration metric: took 18.842579ms WaitForService to wait for kubelet
	I1017 20:14:25.055247  407971 kubeadm.go:586] duration metric: took 12.935239172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:14:25.055270  407971 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:14:25.058768  407971 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1017 20:14:25.058799  407971 node_conditions.go:123] node cpu capacity is 8
	I1017 20:14:25.058815  407971 node_conditions.go:105] duration metric: took 3.538662ms to run NodePressure ...
	I1017 20:14:25.058834  407971 start.go:241] waiting for startup goroutines ...
	I1017 20:14:25.058845  407971 start.go:246] waiting for cluster config update ...
	I1017 20:14:25.058862  407971 start.go:255] writing updated cluster config ...
	I1017 20:14:25.059198  407971 ssh_runner.go:195] Run: rm -f paused
	I1017 20:14:25.064122  407971 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:14:25.071721  407971 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5qbtt" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.078858  407971 pod_ready.go:94] pod "coredns-66bc5c9577-5qbtt" is "Ready"
	I1017 20:14:25.078893  407971 pod_ready.go:86] duration metric: took 7.118956ms for pod "coredns-66bc5c9577-5qbtt" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.081975  407971 pod_ready.go:83] waiting for pod "etcd-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.090535  407971 pod_ready.go:94] pod "etcd-auto-684669" is "Ready"
	I1017 20:14:25.090579  407971 pod_ready.go:86] duration metric: took 8.551128ms for pod "etcd-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.094206  407971 pod_ready.go:83] waiting for pod "kube-apiserver-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.099837  407971 pod_ready.go:94] pod "kube-apiserver-auto-684669" is "Ready"
	I1017 20:14:25.099866  407971 pod_ready.go:86] duration metric: took 5.629519ms for pod "kube-apiserver-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.102675  407971 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.469183  407971 pod_ready.go:94] pod "kube-controller-manager-auto-684669" is "Ready"
	I1017 20:14:25.469215  407971 pod_ready.go:86] duration metric: took 366.510428ms for pod "kube-controller-manager-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:25.669005  407971 pod_ready.go:83] waiting for pod "kube-proxy-nwck8" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:26.068993  407971 pod_ready.go:94] pod "kube-proxy-nwck8" is "Ready"
	I1017 20:14:26.069025  407971 pod_ready.go:86] duration metric: took 399.993169ms for pod "kube-proxy-nwck8" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:26.269553  407971 pod_ready.go:83] waiting for pod "kube-scheduler-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:26.669076  407971 pod_ready.go:94] pod "kube-scheduler-auto-684669" is "Ready"
	I1017 20:14:26.669111  407971 pod_ready.go:86] duration metric: took 399.530008ms for pod "kube-scheduler-auto-684669" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:14:26.669146  407971 pod_ready.go:40] duration metric: took 1.604975691s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
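	
	The extra "Ready" wait above inspects pod conditions for each of the listed label selectors. A client-go sketch of one such check, assuming the kubeconfig path from the log and a go.mod that pulls in k8s.io/client-go:
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Kubeconfig path from the log; outside the node you would use your own.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("pod %q ready=%v\n", p.Name, ready)
		}
	}
	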
	I1017 20:14:26.716526  407971 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 20:14:26.718661  407971 out.go:179] * Done! kubectl is now configured to use "auto-684669" cluster and "default" namespace by default
	I1017 20:14:29.280844  412924 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:14:29.280923  412924 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:14:29.281063  412924 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:14:29.281138  412924 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1017 20:14:29.281194  412924 kubeadm.go:318] OS: Linux
	I1017 20:14:29.281257  412924 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:14:29.281348  412924 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:14:29.281417  412924 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:14:29.281510  412924 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:14:29.281584  412924 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:14:29.281645  412924 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:14:29.281718  412924 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:14:29.281794  412924 kubeadm.go:318] CGROUPS_IO: enabled
	I1017 20:14:29.281885  412924 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:14:29.282023  412924 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:14:29.282168  412924 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:14:29.282253  412924 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 20:14:29.284533  412924 out.go:252]   - Generating certificates and keys ...
	I1017 20:14:29.284625  412924 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:14:29.284717  412924 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:14:29.284822  412924 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:14:29.284896  412924 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:14:29.284971  412924 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:14:29.285033  412924 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:14:29.285096  412924 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:14:29.285258  412924 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kindnet-684669 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1017 20:14:29.285335  412924 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:14:29.285549  412924 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kindnet-684669 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1017 20:14:29.285641  412924 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:14:29.285733  412924 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:14:29.285826  412924 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:14:29.285903  412924 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:14:29.285977  412924 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:14:29.286068  412924 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:14:29.286144  412924 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:14:29.286200  412924 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:14:29.286282  412924 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:14:29.286389  412924 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:14:29.286479  412924 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 20:14:29.289038  412924 out.go:252]   - Booting up control plane ...
	I1017 20:14:29.289177  412924 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:14:29.289264  412924 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:14:29.289390  412924 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:14:29.289544  412924 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:14:29.289665  412924 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:14:29.289817  412924 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:14:29.289960  412924 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:14:29.290016  412924 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:14:29.290220  412924 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:14:29.290365  412924 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 20:14:29.290438  412924 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.730458ms
	I1017 20:14:29.290543  412924 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:14:29.290650  412924 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1017 20:14:29.290809  412924 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:14:29.290937  412924 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 20:14:29.291038  412924 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.076557171s
	I1017 20:14:29.291132  412924 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.713929241s
	I1017 20:14:29.291230  412924 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.502796296s
	I1017 20:14:29.291380  412924 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:14:29.291544  412924 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:14:29.291618  412924 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:14:29.291868  412924 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-684669 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:14:29.291939  412924 kubeadm.go:318] [bootstrap-token] Using token: 3h93g7.a7o7w5qf2vtpthh0
	I1017 20:14:29.294263  412924 out.go:252]   - Configuring RBAC rules ...
	I1017 20:14:29.294425  412924 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:14:29.294543  412924 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:14:29.294729  412924 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:14:29.294948  412924 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:14:29.295104  412924 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:14:29.295193  412924 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:14:29.295347  412924 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:14:29.295422  412924 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:14:29.295495  412924 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:14:29.295508  412924 kubeadm.go:318] 
	I1017 20:14:29.295603  412924 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:14:29.295618  412924 kubeadm.go:318] 
	I1017 20:14:29.295764  412924 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:14:29.295781  412924 kubeadm.go:318] 
	I1017 20:14:29.295821  412924 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:14:29.295919  412924 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:14:29.295994  412924 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:14:29.296004  412924 kubeadm.go:318] 
	I1017 20:14:29.296097  412924 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:14:29.296112  412924 kubeadm.go:318] 
	I1017 20:14:29.296183  412924 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:14:29.296195  412924 kubeadm.go:318] 
	I1017 20:14:29.296272  412924 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:14:29.296383  412924 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:14:29.296482  412924 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:14:29.296492  412924 kubeadm.go:318] 
	I1017 20:14:29.296601  412924 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:14:29.296730  412924 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:14:29.296750  412924 kubeadm.go:318] 
	I1017 20:14:29.296875  412924 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 3h93g7.a7o7w5qf2vtpthh0 \
	I1017 20:14:29.297049  412924 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 \
	I1017 20:14:29.297106  412924 kubeadm.go:318] 	--control-plane 
	I1017 20:14:29.297112  412924 kubeadm.go:318] 
	I1017 20:14:29.297253  412924 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:14:29.297271  412924 kubeadm.go:318] 
	I1017 20:14:29.297392  412924 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 3h93g7.a7o7w5qf2vtpthh0 \
	I1017 20:14:29.297542  412924 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b3b7270723494197b169f0036043b6353e7d1ca49959b4b8f2058b5940851f5 
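	
	The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA. A short Go sketch that recomputes it from the CA certificate, with the path from the log's cert directory:
	
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// kubeadm hashes the DER-encoded Subject Public Key Info.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}
	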
	I1017 20:14:29.297564  412924 cni.go:84] Creating CNI manager for "kindnet"
	I1017 20:14:29.299968  412924 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 20:14:29.301782  412924 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:14:29.307481  412924 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:14:29.307503  412924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:14:29.325729  412924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:14:29.577446  412924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:14:29.577605  412924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:14:29.577658  412924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-684669 minikube.k8s.io/updated_at=2025_10_17T20_14_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=kindnet-684669 minikube.k8s.io/primary=true
	I1017 20:14:29.595598  412924 ops.go:34] apiserver oom_adj: -16
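	
	The CNI step above writes the kindnet manifest out ("scp memory --> /var/tmp/minikube/cni.yaml") and applies it with the versioned kubectl against the local kubeconfig. A stripped-down sketch of that flow; the manifest body is a placeholder and the paths mirror the log:
	
	package main
	
	import (
		"log"
		"os"
		"os/exec"
	)
	
	func main() {
		// Placeholder manifest; the real one is the 2601-byte kindnet YAML in the log.
		manifest := []byte("# kindnet DaemonSet manifest goes here\n")
		if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}
	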
	
	
	==> CRI-O <==
	Oct 17 20:13:51 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:13:51.505300186Z" level=info msg="Started container" PID=1740 containerID=06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9/dashboard-metrics-scraper id=0b9d35e2-b997-47f4-b2f8-922d4a4ef785 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a33e30f01196a37b95050b7072f0f3034337c96f365dc0cd1e80d2fa9406929f
	Oct 17 20:13:52 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:13:52.465841333Z" level=info msg="Removing container: ae50bbd6796debe87db9fa46ef2949d3d8e26fb48382392d370f79e77a535888" id=f4865a32-1e07-4b3d-90c4-e31414f2b8e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:13:52 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:13:52.476665181Z" level=info msg="Removed container ae50bbd6796debe87db9fa46ef2949d3d8e26fb48382392d370f79e77a535888: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9/dashboard-metrics-scraper" id=f4865a32-1e07-4b3d-90c4-e31414f2b8e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.517799112Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=25b6cfa8-b2f4-4185-9ae2-d0eab1eabc18 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.518820563Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4252307b-55e9-4a7f-8391-4ffe4b887106 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.520111447Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f831c736-6f99-4acf-ad66-76d82d61f2f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.52041544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.526587145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.526783562Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/293fbde659b1c1854edb8e09c88e01caa25930e20fecbc9f95f33400cfec2a0b/merged/etc/passwd: no such file or directory"
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.52681169Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/293fbde659b1c1854edb8e09c88e01caa25930e20fecbc9f95f33400cfec2a0b/merged/etc/group: no such file or directory"
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.527642119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.559079418Z" level=info msg="Created container a7cd25c03695ca30218da14c7e94f11aaa2d7d8a98ccd3f06cff2c1dad0922bd: kube-system/storage-provisioner/storage-provisioner" id=f831c736-6f99-4acf-ad66-76d82d61f2f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.560218615Z" level=info msg="Starting container: a7cd25c03695ca30218da14c7e94f11aaa2d7d8a98ccd3f06cff2c1dad0922bd" id=abb6d03e-716a-4ee0-8cde-3f72d6518815 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:14:10 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:10.562956607Z" level=info msg="Started container" PID=1755 containerID=a7cd25c03695ca30218da14c7e94f11aaa2d7d8a98ccd3f06cff2c1dad0922bd description=kube-system/storage-provisioner/storage-provisioner id=abb6d03e-716a-4ee0-8cde-3f72d6518815 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4d35746c39f30639b455f08c950558a3d3a4ae1b1f0f4b06f3389a62031478d
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.390531742Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=889aff3d-adea-49c4-8f3c-db6bc3eb808d name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.391593035Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=de0476f0-340e-4eba-b37f-80eff7a7a072 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.392799403Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9/dashboard-metrics-scraper" id=eb09e89a-ed1f-4b33-8a05-e1a822ca1446 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.393102511Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.399152584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.399864725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.42827861Z" level=info msg="Created container d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9/dashboard-metrics-scraper" id=eb09e89a-ed1f-4b33-8a05-e1a822ca1446 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.429102666Z" level=info msg="Starting container: d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2" id=335fb069-6ba5-481d-8a3f-4edc2b8b805c name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.431436495Z" level=info msg="Started container" PID=1789 containerID=d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9/dashboard-metrics-scraper id=335fb069-6ba5-481d-8a3f-4edc2b8b805c name=/runtime.v1.RuntimeService/StartContainer sandboxID=a33e30f01196a37b95050b7072f0f3034337c96f365dc0cd1e80d2fa9406929f
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.535914705Z" level=info msg="Removing container: 06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af" id=3cd07fd8-b355-4324-98ca-46e1b003ee69 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:14:15 default-k8s-diff-port-563805 crio[558]: time="2025-10-17T20:14:15.547581232Z" level=info msg="Removed container 06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9/dashboard-metrics-scraper" id=3cd07fd8-b355-4324-98ca-46e1b003ee69 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	d1658a45187f3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   a33e30f01196a       dashboard-metrics-scraper-6ffb444bf9-lh7m9             kubernetes-dashboard
	a7cd25c03695c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   a4d35746c39f3       storage-provisioner                                    kube-system
	dec17f1d9027d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   3e9afcfc07afb       kubernetes-dashboard-855c9754f9-cfv55                  kubernetes-dashboard
	620478bbf7c35       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   8d01956ed30b0       coredns-66bc5c9577-bsp94                               kube-system
	6f2500593565c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   03750cd46c13f       busybox                                                default
	c4a95fedc4957       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   37e62db286f1b       kindnet-gzsxs                                          kube-system
	befec0b605a11       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   e2464514adfa6       kube-proxy-g7749                                       kube-system
	f6fedb384a1ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   a4d35746c39f3       storage-provisioner                                    kube-system
	c595776216f07       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   9a8c0bb72d31e       kube-apiserver-default-k8s-diff-port-563805            kube-system
	8b04285c22247       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   78b31eee621aa       etcd-default-k8s-diff-port-563805                      kube-system
	3921f3f537505       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   ed08be654bd73       kube-scheduler-default-k8s-diff-port-563805            kube-system
	304a87295c1b6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   d5d538f3961cd       kube-controller-manager-default-k8s-diff-port-563805   kube-system
	
	
	==> coredns [620478bbf7c357ce43fdb113d1af8b156c3f06537ebbde3f375835b749f63165] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42205 - 26352 "HINFO IN 3984473532376090302.5320220858447455705. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.102008098s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-563805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-563805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=default-k8s-diff-port-563805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_12_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:12:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-563805
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:14:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:14:09 +0000   Fri, 17 Oct 2025 20:12:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:14:09 +0000   Fri, 17 Oct 2025 20:12:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:14:09 +0000   Fri, 17 Oct 2025 20:12:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:14:09 +0000   Fri, 17 Oct 2025 20:12:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-563805
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                8216883e-3ed5-4f7d-8ef7-444b758f4457
	  Boot ID:                    5be2552e-7324-47ee-95d1-29e555191ce0
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-bsp94                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-default-k8s-diff-port-563805                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-gzsxs                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-563805             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-563805    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-g7749                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-563805             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lh7m9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cfv55                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           107s               node-controller  Node default-k8s-diff-port-563805 event: Registered Node default-k8s-diff-port-563805 in Controller
	  Normal  NodeReady                95s                kubelet          Node default-k8s-diff-port-563805 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-563805 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node default-k8s-diff-port-563805 event: Registered Node default-k8s-diff-port-563805 in Controller
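
The node summary above is standard kubectl describe node output: the Requests/Limits totals are sums over the pods scheduled on the node, which is why CPU requests add up to 850m while limits total only 100m (kindnet is the only pod that declares a CPU limit). It can be reproduced against this profile with:

	kubectl --context default-k8s-diff-port-563805 describe node default-k8s-diff-port-563805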
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 8a eb a7 ac b4 08 06
	[  +6.673587] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 83 8b 2b d5 4b 08 06
	[Oct17 19:28] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.025928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023873] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.023920] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.024844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +1.022888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +2.047796] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[  +4.031595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[Oct17 19:29] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +16.382540] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
	[ +32.254198] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a fe de 02 34 a2 b6 a4 ca 85 3e c2 08 00
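
The repeated "martian source ... from 127.0.0.1" entries mean the kernel saw packets with a loopback source address arrive on eth0. That is expected noise here rather than a fault: kube-proxy sets route_localnet=1 (see its log further below) so NodePort services accept connections on localhost, and with martian logging enabled the kernel reports the resulting packets. The relevant sysctls can be inspected on the node:

	out/minikube-linux-amd64 -p default-k8s-diff-port-563805 ssh -- sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.eth0.log_martians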
	
	
	==> etcd [8b04285c222479d3b2ea10ca1123a4893d4e6350366905f40c907646a9f3259c] <==
	{"level":"warn","ts":"2025-10-17T20:13:38.111187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.118872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.127356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.136539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.143993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.151600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.159885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.174024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.177858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.193600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:13:38.252260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54006","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T20:13:46.561982Z","caller":"traceutil/trace.go:172","msg":"trace[2065924850] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"159.978583ms","start":"2025-10-17T20:13:46.401970Z","end":"2025-10-17T20:13:46.561948Z","steps":["trace[2065924850] 'process raft request'  (duration: 101.858557ms)","trace[2065924850] 'compare'  (duration: 58.018754ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:47.256659Z","caller":"traceutil/trace.go:172","msg":"trace[511270124] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"130.623243ms","start":"2025-10-17T20:13:47.126013Z","end":"2025-10-17T20:13:47.256636Z","steps":["trace[511270124] 'process raft request'  (duration: 128.859822ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T20:13:47.418573Z","caller":"traceutil/trace.go:172","msg":"trace[1494539764] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"156.777886ms","start":"2025-10-17T20:13:47.261770Z","end":"2025-10-17T20:13:47.418548Z","steps":["trace[1494539764] 'process raft request'  (duration: 135.264492ms)","trace[1494539764] 'compare'  (duration: 21.390588ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:47.565673Z","caller":"traceutil/trace.go:172","msg":"trace[951471414] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"142.086255ms","start":"2025-10-17T20:13:47.423556Z","end":"2025-10-17T20:13:47.565643Z","steps":["trace[951471414] 'process raft request'  (duration: 127.271842ms)","trace[951471414] 'compare'  (duration: 14.405914ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:47.715526Z","caller":"traceutil/trace.go:172","msg":"trace[1498324144] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"144.592763ms","start":"2025-10-17T20:13:47.570911Z","end":"2025-10-17T20:13:47.715503Z","steps":["trace[1498324144] 'process raft request'  (duration: 117.083871ms)","trace[1498324144] 'compare'  (duration: 27.376489ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:47.987017Z","caller":"traceutil/trace.go:172","msg":"trace[852530649] transaction","detail":"{read_only:false; response_revision:525; number_of_response:1; }","duration":"177.121462ms","start":"2025-10-17T20:13:47.809866Z","end":"2025-10-17T20:13:47.986988Z","steps":["trace[852530649] 'process raft request'  (duration: 123.154334ms)","trace[852530649] 'compare'  (duration: 53.604575ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T20:13:48.343987Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"287.498836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-bsp94\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-10-17T20:13:48.344172Z","caller":"traceutil/trace.go:172","msg":"trace[903936769] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-bsp94; range_end:; response_count:1; response_revision:526; }","duration":"287.745049ms","start":"2025-10-17T20:13:48.056408Z","end":"2025-10-17T20:13:48.344153Z","steps":["trace[903936769] 'agreement among raft nodes before linearized reading'  (duration: 72.904648ms)","trace[903936769] 'range keys from in-memory index tree'  (duration: 214.473644ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T20:13:48.344780Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"214.669085ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596442982395777 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-563805.186f6070acd65743\" mod_revision:524 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-563805.186f6070acd65743\" value_size:690 lease:499224406127619844 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-563805.186f6070acd65743\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-17T20:13:48.344874Z","caller":"traceutil/trace.go:172","msg":"trace[1296264633] transaction","detail":"{read_only:false; response_revision:527; number_of_response:1; }","duration":"345.012482ms","start":"2025-10-17T20:13:47.999848Z","end":"2025-10-17T20:13:48.344861Z","steps":["trace[1296264633] 'process raft request'  (duration: 129.470485ms)","trace[1296264633] 'compare'  (duration: 214.5254ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T20:13:48.344934Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T20:13:47.999814Z","time spent":"345.082821ms","remote":"127.0.0.1:53024","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":778,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-563805.186f6070acd65743\" mod_revision:524 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-563805.186f6070acd65743\" value_size:690 lease:499224406127619844 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-563805.186f6070acd65743\" > >"}
	{"level":"warn","ts":"2025-10-17T20:13:48.586654Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.786997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T20:13:48.586726Z","caller":"traceutil/trace.go:172","msg":"trace[952525593] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:530; }","duration":"143.865424ms","start":"2025-10-17T20:13:48.442844Z","end":"2025-10-17T20:13:48.586709Z","steps":["trace[952525593] 'agreement among raft nodes before linearized reading'  (duration: 84.098089ms)","trace[952525593] 'range keys from in-memory index tree'  (duration: 59.659352ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T20:13:48.586824Z","caller":"traceutil/trace.go:172","msg":"trace[458332569] transaction","detail":"{read_only:false; response_revision:531; number_of_response:1; }","duration":"183.909471ms","start":"2025-10-17T20:13:48.402893Z","end":"2025-10-17T20:13:48.586803Z","steps":["trace[458332569] 'process raft request'  (duration: 124.087536ms)","trace[458332569] 'compare'  (duration: 59.607168ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:14:30 up  1:56,  0 user,  load average: 4.41, 4.64, 3.02
	Linux default-k8s-diff-port-563805 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c4a95fedc4957f2772d4188de75f2d0b0715d0ead81d66093c1bb82a882026d5] <==
	I1017 20:13:39.999637       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:13:39.999943       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 20:13:40.000155       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:13:40.000173       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:13:40.000198       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:13:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:13:40.296948       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:13:40.297075       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:13:40.297096       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:13:40.297282       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:13:40.697318       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:13:40.697353       1 metrics.go:72] Registering metrics
	I1017 20:13:40.697469       1 controller.go:711] "Syncing nftables rules"
	I1017 20:13:50.259344       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:13:50.259424       1 main.go:301] handling current node
	I1017 20:14:00.259035       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:14:00.259086       1 main.go:301] handling current node
	I1017 20:14:10.259313       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:14:10.259368       1 main.go:301] handling current node
	I1017 20:14:20.258797       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:14:20.258831       1 main.go:301] handling current node
	I1017 20:14:30.266894       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:14:30.266927       1 main.go:301] handling current node
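
The paired "Handling node with IPs" / "handling current node" lines show kindnetd's reconcile loop waking every 10 seconds; on a single-node cluster it only ever handles the local node. The earlier "nri plugin exited" message records that the container runtime exposes no NRI socket, and the later entries show kindnetd carrying on without it. Whether the socket exists can be checked directly:

	out/minikube-linux-amd64 -p default-k8s-diff-port-563805 ssh -- ls -l /var/run/nri/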
	
	
	==> kube-apiserver [c595776216f076fd092a3194172be36c923143b82bc0c107305659b192166d72] <==
	I1017 20:13:38.909038       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:13:38.910872       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:13:38.910892       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:13:38.910900       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:13:38.910907       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:13:38.911191       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:13:38.911356       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:13:38.920843       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:13:38.921785       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1017 20:13:38.923555       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:13:38.967420       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:13:38.976310       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 20:13:39.317594       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:13:39.350171       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:13:39.375891       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:13:39.384873       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:13:39.394460       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:13:39.435666       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.223.121"}
	I1017 20:13:39.446767       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.245.135"}
	I1017 20:13:39.808466       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:13:42.456710       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:13:42.855723       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:13:42.906127       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
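
The one error line above ("Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage") is a transient message commonly seen right after an apiserver restart, when there are no stale endpoints to clean up; the following "quota admission added evaluator" lines show admission registering resources normally. Readiness after a restart like this can be confirmed through the readyz endpoint:

	kubectl --context default-k8s-diff-port-563805 get --raw '/readyz?verbose'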
	
	
	==> kube-controller-manager [304a87295c1b69a58634803b264b8f89d380003a2081fe68a13fad1c6406af7c] <==
	I1017 20:13:42.302789       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:13:42.302891       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-563805"
	I1017 20:13:42.302948       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:13:42.303639       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:13:42.305966       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:13:42.308311       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:13:42.308660       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:13:42.323806       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:13:42.327147       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:13:42.332288       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:13:42.337594       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:13:42.339924       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:13:42.345272       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:13:42.345295       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:13:42.345316       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:13:42.349638       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:13:42.351661       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:13:42.352723       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:13:42.352787       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:13:42.353957       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:13:42.358481       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:13:42.358534       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:13:42.358573       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:13:42.358586       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:13:42.358593       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
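
Each controller is gated on its informer caches, hence the burst of "Caches are synced" lines once the apiserver is reachable again; the garbage collector logs twice because it waits both for the shared informers and for all of its resource monitors. The controller-manager's leader-election state is kept in a Lease object in kube-system:

	kubectl --context default-k8s-diff-port-563805 -n kube-system get lease kube-controller-manager -o yaml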
	
	
	==> kube-proxy [befec0b605a11944db3aa5e1626c300e786a26bec9be6f5bef7d94439e2b74cd] <==
	I1017 20:13:39.798668       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:13:39.860824       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:13:39.960961       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:13:39.961001       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 20:13:39.961107       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:13:39.986277       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:13:39.986341       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:13:39.993539       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:13:39.994035       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:13:39.994074       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:13:39.998297       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:13:39.998326       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:13:39.998365       1 config.go:200] "Starting service config controller"
	I1017 20:13:39.998372       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:13:39.998399       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:13:39.998397       1 config.go:309] "Starting node config controller"
	I1017 20:13:39.998412       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:13:39.998419       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:13:39.998405       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:13:40.098592       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:13:40.098557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:13:40.098691       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
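
kube-proxy flags the only actionable item itself: with nodePortAddresses unset, NodePort services accept connections on all local IPs, and its own message suggests the "primary" setting. In a kubeadm-provisioned cluster such as this one, that setting lives in the KubeProxyConfiguration (field nodePortAddresses) stored in the kube-proxy ConfigMap:

	kubectl --context default-k8s-diff-port-563805 -n kube-system get configmap kube-proxy -o yaml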
	
	
	==> kube-scheduler [3921f3f5375050e83141087f7f8ca522220b109c30ad4b4d1d6c09216bc51b9b] <==
	I1017 20:13:38.032136       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:13:39.229535       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:13:39.229566       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:13:39.235471       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:13:39.235465       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:13:39.235519       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:13:39.235477       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:39.235587       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:39.235519       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:13:39.236559       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:13:39.236653       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:13:39.336912       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 20:13:39.336979       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:13:39.336919       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:13:42 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:42.901459     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8cb77f18-44bb-401c-b230-621ccb6ff4a4-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-cfv55\" (UID: \"8cb77f18-44bb-401c-b230-621ccb6ff4a4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cfv55"
	Oct 17 20:13:42 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:42.901507     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zpbp\" (UniqueName: \"kubernetes.io/projected/2765cd74-c48c-40ad-8ac1-fb1a758dcd41-kube-api-access-5zpbp\") pod \"dashboard-metrics-scraper-6ffb444bf9-lh7m9\" (UID: \"2765cd74-c48c-40ad-8ac1-fb1a758dcd41\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9"
	Oct 17 20:13:42 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:42.901526     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2765cd74-c48c-40ad-8ac1-fb1a758dcd41-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-lh7m9\" (UID: \"2765cd74-c48c-40ad-8ac1-fb1a758dcd41\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9"
	Oct 17 20:13:42 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:42.901632     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzgr4\" (UniqueName: \"kubernetes.io/projected/8cb77f18-44bb-401c-b230-621ccb6ff4a4-kube-api-access-kzgr4\") pod \"kubernetes-dashboard-855c9754f9-cfv55\" (UID: \"8cb77f18-44bb-401c-b230-621ccb6ff4a4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cfv55"
	Oct 17 20:13:49 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:49.474343     722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cfv55" podStartSLOduration=1.843655536 podStartE2EDuration="7.474309742s" podCreationTimestamp="2025-10-17 20:13:42 +0000 UTC" firstStartedPulling="2025-10-17 20:13:43.115060649 +0000 UTC m=+6.833479908" lastFinishedPulling="2025-10-17 20:13:48.745714842 +0000 UTC m=+12.464134114" observedRunningTime="2025-10-17 20:13:49.474296413 +0000 UTC m=+13.192715693" watchObservedRunningTime="2025-10-17 20:13:49.474309742 +0000 UTC m=+13.192729022"
	Oct 17 20:13:51 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:51.459428     722 scope.go:117] "RemoveContainer" containerID="ae50bbd6796debe87db9fa46ef2949d3d8e26fb48382392d370f79e77a535888"
	Oct 17 20:13:52 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:52.464184     722 scope.go:117] "RemoveContainer" containerID="ae50bbd6796debe87db9fa46ef2949d3d8e26fb48382392d370f79e77a535888"
	Oct 17 20:13:52 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:52.464492     722 scope.go:117] "RemoveContainer" containerID="06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af"
	Oct 17 20:13:52 default-k8s-diff-port-563805 kubelet[722]: E1017 20:13:52.464728     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lh7m9_kubernetes-dashboard(2765cd74-c48c-40ad-8ac1-fb1a758dcd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9" podUID="2765cd74-c48c-40ad-8ac1-fb1a758dcd41"
	Oct 17 20:13:53 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:53.469621     722 scope.go:117] "RemoveContainer" containerID="06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af"
	Oct 17 20:13:53 default-k8s-diff-port-563805 kubelet[722]: E1017 20:13:53.469798     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lh7m9_kubernetes-dashboard(2765cd74-c48c-40ad-8ac1-fb1a758dcd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9" podUID="2765cd74-c48c-40ad-8ac1-fb1a758dcd41"
	Oct 17 20:13:59 default-k8s-diff-port-563805 kubelet[722]: I1017 20:13:59.893906     722 scope.go:117] "RemoveContainer" containerID="06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af"
	Oct 17 20:13:59 default-k8s-diff-port-563805 kubelet[722]: E1017 20:13:59.894128     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lh7m9_kubernetes-dashboard(2765cd74-c48c-40ad-8ac1-fb1a758dcd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9" podUID="2765cd74-c48c-40ad-8ac1-fb1a758dcd41"
	Oct 17 20:14:10 default-k8s-diff-port-563805 kubelet[722]: I1017 20:14:10.517317     722 scope.go:117] "RemoveContainer" containerID="f6fedb384a1ad00b57204bbb8a84f0877c763ba980fe5fe9bdd6d9fd495b8981"
	Oct 17 20:14:15 default-k8s-diff-port-563805 kubelet[722]: I1017 20:14:15.389885     722 scope.go:117] "RemoveContainer" containerID="06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af"
	Oct 17 20:14:15 default-k8s-diff-port-563805 kubelet[722]: I1017 20:14:15.534428     722 scope.go:117] "RemoveContainer" containerID="06bf105b13a5e3e05b34c0dc97cb9ca6ea813749ca62438aceff8d13766b68af"
	Oct 17 20:14:15 default-k8s-diff-port-563805 kubelet[722]: I1017 20:14:15.534690     722 scope.go:117] "RemoveContainer" containerID="d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2"
	Oct 17 20:14:15 default-k8s-diff-port-563805 kubelet[722]: E1017 20:14:15.535112     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lh7m9_kubernetes-dashboard(2765cd74-c48c-40ad-8ac1-fb1a758dcd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9" podUID="2765cd74-c48c-40ad-8ac1-fb1a758dcd41"
	Oct 17 20:14:19 default-k8s-diff-port-563805 kubelet[722]: I1017 20:14:19.893924     722 scope.go:117] "RemoveContainer" containerID="d1658a45187f31803ade97f98ac1b8a655c6108d7988974256627f6a935f98f2"
	Oct 17 20:14:19 default-k8s-diff-port-563805 kubelet[722]: E1017 20:14:19.894116     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lh7m9_kubernetes-dashboard(2765cd74-c48c-40ad-8ac1-fb1a758dcd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lh7m9" podUID="2765cd74-c48c-40ad-8ac1-fb1a758dcd41"
	Oct 17 20:14:25 default-k8s-diff-port-563805 kubelet[722]: I1017 20:14:25.218040     722 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 17 20:14:25 default-k8s-diff-port-563805 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:14:25 default-k8s-diff-port-563805 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:14:25 default-k8s-diff-port-563805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 17 20:14:25 default-k8s-diff-port-563805 systemd[1]: kubelet.service: Consumed 1.735s CPU time.
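
Two things are visible here: dashboard-metrics-scraper crash-looping under kubelet's exponential backoff (back-off 10s, then 20s; kubelet keeps doubling the delay up to a 5m cap), and kubelet itself being stopped cleanly by systemd at 20:14:25, which lines up with the pause operation this post-mortem covers. The crashing container's last output could be pulled with --previous (assuming the owning Deployment is named dashboard-metrics-scraper, per the pod-name prefix):

	kubectl --context default-k8s-diff-port-563805 -n kubernetes-dashboard logs deploy/dashboard-metrics-scraper --previous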
	
	
	==> kubernetes-dashboard [dec17f1d9027dfa31aeaa2dc6ea73f5f3ea06821f779ca9a7b446e04d0051274] <==
	2025/10/17 20:13:48 Starting overwatch
	2025/10/17 20:13:48 Using namespace: kubernetes-dashboard
	2025/10/17 20:13:48 Using in-cluster config to connect to apiserver
	2025/10/17 20:13:48 Using secret token for csrf signing
	2025/10/17 20:13:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:13:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:13:48 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 20:13:48 Generating JWE encryption key
	2025/10/17 20:13:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:13:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:13:49 Initializing JWE encryption key from synchronized object
	2025/10/17 20:13:49 Creating in-cluster Sidecar client
	2025/10/17 20:13:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:13:49 Serving insecurely on HTTP port: 9090
	2025/10/17 20:14:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
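
The dashboard's metric client health check fails and retries every 30 seconds because its sidecar service, dashboard-metrics-scraper, is backed by the pod shown crash-looping in the kubelet section; the dashboard itself keeps serving on HTTP port 9090 regardless. The service and its endpoints can be checked with:

	kubectl --context default-k8s-diff-port-563805 -n kubernetes-dashboard get svc,endpoints dashboard-metrics-scraper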
	
	
	==> storage-provisioner [a7cd25c03695ca30218da14c7e94f11aaa2d7d8a98ccd3f06cff2c1dad0922bd] <==
	I1017 20:14:10.578160       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:14:10.588213       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:14:10.588375       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:14:10.598930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:14.055938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:18.316083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:21.915060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:24.969106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:27.994723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:28.005109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:14:28.005610       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:14:28.005710       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7cbc6369-13f8-42ff-8d5e-a08248991cf2", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-563805_09088fe6-096b-44dc-b2af-2ca91919bacd became leader
	I1017 20:14:28.005993       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-563805_09088fe6-096b-44dc-b2af-2ca91919bacd!
	W1017 20:14:28.014824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:28.028191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:14:28.106871       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-563805_09088fe6-096b-44dc-b2af-2ca91919bacd!
	W1017 20:14:30.032417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:14:30.038665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f6fedb384a1ad00b57204bbb8a84f0877c763ba980fe5fe9bdd6d9fd495b8981] <==
	I1017 20:13:39.769272       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:14:09.774135       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
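
This is the previous storage-provisioner instance: it started at 20:13:39, could not reach the apiserver's service IP (10.96.0.1:443) before the dial timed out, and exited fatally at 20:14:09. The kubelet log above shows the container (f6fedb38...) being removed at 20:14:10, after which the replacement instance in the preceding section acquired the k8s.io-minikube-hostpath lock; the current holder is recorded on that Endpoints object:

	kubectl --context default-k8s-diff-port-563805 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml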
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805: exit status 2 (345.834354ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-563805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.60s)

                                                
                                    

Test pass (264/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.44
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 5.3
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.42
21 TestBinaryMirror 0.83
22 TestOffline 63.87
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 150.74
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 8.46
48 TestAddons/StoppedEnableDisable 18.62
49 TestCertOptions 28.47
50 TestCertExpiration 214.34
52 TestForceSystemdFlag 26.93
53 TestForceSystemdEnv 31.52
55 TestKVMDriverInstallOrUpdate 0.82
59 TestErrorSpam/setup 25.01
60 TestErrorSpam/start 0.68
61 TestErrorSpam/status 0.97
62 TestErrorSpam/pause 6.7
63 TestErrorSpam/unpause 5.75
64 TestErrorSpam/stop 8.13
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 39.89
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.42
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.13
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.7
76 TestFunctional/serial/CacheCmd/cache/add_local 1.65
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 75.5
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.34
87 TestFunctional/serial/LogsFileCmd 1.36
88 TestFunctional/serial/InvalidService 4.25
90 TestFunctional/parallel/ConfigCmd 0.35
91 TestFunctional/parallel/DashboardCmd 8.14
92 TestFunctional/parallel/DryRun 0.38
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.95
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 26.42
102 TestFunctional/parallel/SSHCmd 0.63
103 TestFunctional/parallel/CpCmd 1.77
104 TestFunctional/parallel/MySQL 17.19
105 TestFunctional/parallel/FileSync 0.29
106 TestFunctional/parallel/CertSync 1.83
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
114 TestFunctional/parallel/License 0.41
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
121 TestFunctional/parallel/ImageCommands/ImageListYaml 1.28
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.4
123 TestFunctional/parallel/ImageCommands/Setup 1.62
124 TestFunctional/parallel/Version/short 0.07
125 TestFunctional/parallel/Version/components 0.58
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.24
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
138 TestFunctional/parallel/MountCmd/any-port 8.91
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/MountCmd/specific-port 2.04
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.78
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
149 TestFunctional/parallel/ProfileCmd/profile_list 0.4
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
151 TestFunctional/parallel/ServiceCmd/List 1.76
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 141.93
164 TestMultiControlPlane/serial/DeployApp 4.85
165 TestMultiControlPlane/serial/PingHostFromPods 0.99
166 TestMultiControlPlane/serial/AddWorkerNode 24.33
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
169 TestMultiControlPlane/serial/CopyFile 17.02
170 TestMultiControlPlane/serial/StopSecondaryNode 19.82
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
172 TestMultiControlPlane/serial/RestartSecondaryNode 9.13
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 107.56
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.63
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
177 TestMultiControlPlane/serial/StopCluster 46.6
178 TestMultiControlPlane/serial/RestartCluster 54.29
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
180 TestMultiControlPlane/serial/AddSecondaryNode 34.34
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
185 TestJSONOutput/start/Command 39.35
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 8
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 28.7
211 TestKicCustomNetwork/use_default_bridge_network 24.15
212 TestKicExistingNetwork 25.83
213 TestKicCustomSubnet 24.79
214 TestKicStaticIP 27.55
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 48.43
219 TestMountStart/serial/StartWithMountFirst 5.89
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 5.46
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.27
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 91.98
231 TestMultiNode/serial/DeployApp2Nodes 4.39
232 TestMultiNode/serial/PingHostFrom2Pods 0.68
233 TestMultiNode/serial/AddNode 54.07
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.67
236 TestMultiNode/serial/CopyFile 9.88
237 TestMultiNode/serial/StopNode 2.3
238 TestMultiNode/serial/StartAfterStop 7.4
239 TestMultiNode/serial/RestartKeepsNodes 80.33
240 TestMultiNode/serial/DeleteNode 5.3
241 TestMultiNode/serial/StopMultiNode 28.63
242 TestMultiNode/serial/RestartMultiNode 44.81
243 TestMultiNode/serial/ValidateNameConflict 25.31
248 TestPreload 109.87
250 TestScheduledStopUnix 97.14
253 TestInsufficientStorage 10.22
254 TestRunningBinaryUpgrade 58.14
256 TestKubernetesUpgrade 400.29
257 TestMissingContainerUpgrade 76.94
258 TestStoppedBinaryUpgrade/Setup 0.52
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
269 TestPause/serial/Start 57
270 TestNoKubernetes/serial/StartWithK8s 38.31
271 TestStoppedBinaryUpgrade/Upgrade 57.71
272 TestNoKubernetes/serial/StartWithStopK8s 17.56
273 TestNoKubernetes/serial/Start 5.47
274 TestPause/serial/SecondStartNoReconfiguration 6.66
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
277 TestNoKubernetes/serial/ProfileList 3.14
279 TestNoKubernetes/serial/Stop 2.71
280 TestNoKubernetes/serial/StartNoArgs 6.84
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
289 TestNetworkPlugins/group/false 4.1
294 TestStartStop/group/old-k8s-version/serial/FirstStart 50.58
296 TestStartStop/group/no-preload/serial/FirstStart 51.21
297 TestStartStop/group/old-k8s-version/serial/DeployApp 8.35
299 TestStartStop/group/old-k8s-version/serial/Stop 16.03
300 TestStartStop/group/no-preload/serial/DeployApp 8.24
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
302 TestStartStop/group/old-k8s-version/serial/SecondStart 44.33
304 TestStartStop/group/no-preload/serial/Stop 16.19
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
306 TestStartStop/group/no-preload/serial/SecondStart 49.19
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
312 TestStartStop/group/embed-certs/serial/FirstStart 42.11
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.41
320 TestStartStop/group/newest-cni/serial/FirstStart 27.95
321 TestStartStop/group/embed-certs/serial/DeployApp 8.29
323 TestStartStop/group/embed-certs/serial/Stop 18.12
324 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/Stop 2.51
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
328 TestStartStop/group/newest-cni/serial/SecondStart 10.86
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
330 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
331 TestStartStop/group/embed-certs/serial/SecondStart 44.38
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.8
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 43.86
340 TestNetworkPlugins/group/auto/Start 43.92
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
343 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
345 TestNetworkPlugins/group/kindnet/Start 38.13
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
350 TestNetworkPlugins/group/auto/KubeletFlags 0.31
351 TestNetworkPlugins/group/auto/NetCatPod 9.23
352 TestNetworkPlugins/group/calico/Start 52.68
353 TestNetworkPlugins/group/auto/DNS 0.12
354 TestNetworkPlugins/group/auto/Localhost 0.1
355 TestNetworkPlugins/group/auto/HairPin 0.11
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
358 TestNetworkPlugins/group/kindnet/NetCatPod 8.2
359 TestNetworkPlugins/group/custom-flannel/Start 56.96
360 TestNetworkPlugins/group/kindnet/DNS 0.15
361 TestNetworkPlugins/group/kindnet/Localhost 0.13
362 TestNetworkPlugins/group/kindnet/HairPin 0.16
363 TestNetworkPlugins/group/enable-default-cni/Start 65.73
364 TestNetworkPlugins/group/flannel/Start 48.35
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/KubeletFlags 0.39
367 TestNetworkPlugins/group/calico/NetCatPod 9.29
368 TestNetworkPlugins/group/calico/DNS 0.13
369 TestNetworkPlugins/group/calico/Localhost 0.11
370 TestNetworkPlugins/group/calico/HairPin 0.11
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.25
373 TestNetworkPlugins/group/custom-flannel/DNS 0.12
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
376 TestNetworkPlugins/group/bridge/Start 36.69
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.54
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
384 TestNetworkPlugins/group/flannel/NetCatPod 8.22
385 TestNetworkPlugins/group/flannel/DNS 0.13
386 TestNetworkPlugins/group/flannel/Localhost 0.09
387 TestNetworkPlugins/group/flannel/HairPin 0.1
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
389 TestNetworkPlugins/group/bridge/NetCatPod 8.2
390 TestNetworkPlugins/group/bridge/DNS 0.11
391 TestNetworkPlugins/group/bridge/Localhost 0.09
392 TestNetworkPlugins/group/bridge/HairPin 0.09
x
+
TestDownloadOnly/v1.28.0/json-events (4.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-219122 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-219122 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.441618775s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.44s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1017 19:25:25.927286  139217 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1017 19:25:25.927387  139217 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
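
The check is purely local: minikube caches a per-version preload tarball (a packed snapshot of the container runtime's image store for that Kubernetes version) under the profile's cache directory, and this test only asserts that the file fetched by the json-events step above is present:

	ls -lh /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/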

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-219122
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-219122: exit status 85 (66.312818ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-219122 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-219122 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:25:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:25:21.528173  139229 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:25:21.528439  139229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:25:21.528449  139229 out.go:374] Setting ErrFile to fd 2...
	I1017 19:25:21.528454  139229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:25:21.528718  139229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	W1017 19:25:21.528860  139229 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21664-135723/.minikube/config/config.json: open /home/jenkins/minikube-integration/21664-135723/.minikube/config/config.json: no such file or directory
	I1017 19:25:21.529411  139229 out.go:368] Setting JSON to true
	I1017 19:25:21.530332  139229 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4070,"bootTime":1760725052,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:25:21.530428  139229 start.go:141] virtualization: kvm guest
	I1017 19:25:21.533625  139229 out.go:99] [download-only-219122] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:25:21.533793  139229 notify.go:220] Checking for updates...
	W1017 19:25:21.533813  139229 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball: no such file or directory
	I1017 19:25:21.535167  139229 out.go:171] MINIKUBE_LOCATION=21664
	I1017 19:25:21.536784  139229 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:25:21.538306  139229 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 19:25:21.539629  139229 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 19:25:21.540928  139229 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1017 19:25:21.543764  139229 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 19:25:21.544020  139229 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:25:21.566888  139229 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:25:21.566967  139229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:25:21.623391  139229 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-17 19:25:21.612943648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:25:21.623497  139229 docker.go:318] overlay module found
	I1017 19:25:21.625502  139229 out.go:99] Using the docker driver based on user configuration
	I1017 19:25:21.625532  139229 start.go:305] selected driver: docker
	I1017 19:25:21.625540  139229 start.go:925] validating driver "docker" against <nil>
	I1017 19:25:21.625637  139229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:25:21.683523  139229 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-17 19:25:21.672352801 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:25:21.683729  139229 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:25:21.684348  139229 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1017 19:25:21.684494  139229 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 19:25:21.686403  139229 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-219122 host does not exist
	  To start a cluster, run: "minikube start -p download-only-219122"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-219122
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (5.3s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-893455 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-893455 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.301203862s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.30s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1017 19:25:31.652599  139217 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1017 19:25:31.652644  139217 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-893455
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-893455: exit status 85 (65.747412ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-219122 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-219122 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:25 UTC │
	│ delete  │ -p download-only-219122                                                                                                                                                   │ download-only-219122 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:25 UTC │
	│ start   │ -o=json --download-only -p download-only-893455 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-893455 │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:25:26
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:25:26.392618  139582 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:25:26.392935  139582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:25:26.392946  139582 out.go:374] Setting ErrFile to fd 2...
	I1017 19:25:26.392951  139582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:25:26.393173  139582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:25:26.393648  139582 out.go:368] Setting JSON to true
	I1017 19:25:26.394637  139582 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4074,"bootTime":1760725052,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:25:26.394757  139582 start.go:141] virtualization: kvm guest
	I1017 19:25:26.396856  139582 out.go:99] [download-only-893455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:25:26.397027  139582 notify.go:220] Checking for updates...
	I1017 19:25:26.398483  139582 out.go:171] MINIKUBE_LOCATION=21664
	I1017 19:25:26.399928  139582 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:25:26.401191  139582 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 19:25:26.402698  139582 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 19:25:26.404114  139582 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1017 19:25:26.406697  139582 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 19:25:26.406979  139582 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:25:26.429580  139582 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:25:26.429666  139582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:25:26.487562  139582 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-17 19:25:26.477719754 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:25:26.487685  139582 docker.go:318] overlay module found
	I1017 19:25:26.489720  139582 out.go:99] Using the docker driver based on user configuration
	I1017 19:25:26.489776  139582 start.go:305] selected driver: docker
	I1017 19:25:26.489785  139582 start.go:925] validating driver "docker" against <nil>
	I1017 19:25:26.489890  139582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:25:26.545011  139582 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-17 19:25:26.535084473 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:25:26.545237  139582 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:25:26.545965  139582 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1017 19:25:26.546165  139582 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 19:25:26.548413  139582 out.go:171] Using Docker driver with root privileges
	I1017 19:25:26.550136  139582 cni.go:84] Creating CNI manager for ""
	I1017 19:25:26.550203  139582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:25:26.550214  139582 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 19:25:26.550299  139582 start.go:349] cluster config:
	{Name:download-only-893455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-893455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:25:26.551886  139582 out.go:99] Starting "download-only-893455" primary control-plane node in "download-only-893455" cluster
	I1017 19:25:26.551911  139582 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:25:26.553394  139582 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:25:26.553423  139582 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:25:26.553488  139582 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:25:26.570628  139582 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 19:25:26.570793  139582 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1017 19:25:26.570814  139582 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1017 19:25:26.570819  139582 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1017 19:25:26.570829  139582 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1017 19:25:26.573543  139582 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:25:26.573572  139582 cache.go:58] Caching tarball of preloaded images
	I1017 19:25:26.573757  139582 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:25:26.575983  139582 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1017 19:25:26.576014  139582 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1017 19:25:26.600483  139582 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1017 19:25:26.600535  139582 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:25:30.832468  139582 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:25:30.832875  139582 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/download-only-893455/config.json ...
	I1017 19:25:30.832914  139582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/download-only-893455/config.json: {Name:mk2f4ef827995bdb4af08442ddd0809b21a940fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:25:30.833091  139582 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:25:30.833241  139582 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21664-135723/.minikube/cache/bin/linux/amd64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-893455 host does not exist
	  To start a cluster, run: "minikube start -p download-only-893455"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)
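
The download.go lines above append an md5 checksum to the preload URL so the fetch is verified on arrival. A minimal stand-alone sketch of the same verification with plain curl and md5sum (the /tmp destination is illustrative; the URL and checksum are the ones in the log):

	url="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
	want="d1a46823b9241c5d38b5e0866197f2a8"            # checksum the GCS API returned above
	curl -fsSL -o /tmp/preload.tar.lz4 "$url"          # download the preload tarball
	echo "$want  /tmp/preload.tar.lz4" | md5sum -c -   # non-zero exit if the bytes do not match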

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-893455
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-414872 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-414872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-414872
--- PASS: TestDownloadOnlyKic (0.42s)

TestBinaryMirror (0.83s)

=== RUN   TestBinaryMirror
I1017 19:25:32.769117  139217 binary.go:77] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-524976 --alsologtostderr --binary-mirror http://127.0.0.1:38925 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-524976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-524976
--- PASS: TestBinaryMirror (0.83s)

TestOffline (63.87s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-259515 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-259515 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m1.099116751s)
helpers_test.go:175: Cleaning up "offline-crio-259515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-259515
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-259515: (2.773138343s)
--- PASS: TestOffline (63.87s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-808548
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-808548: exit status 85 (55.826336ms)

-- stdout --
	* Profile "addons-808548" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-808548"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-808548
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-808548: exit status 85 (55.536376ms)

-- stdout --
	* Profile "addons-808548" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-808548"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (150.74s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-808548 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-808548 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m30.739022357s)
--- PASS: TestAddons/Setup (150.74s)
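
TestAddons/Setup enables every addon up front via repeated --addons flags. The same toggles also work one at a time on the running profile; a brief sketch (metrics-server is an arbitrary example):

	out/minikube-linux-amd64 -p addons-808548 addons enable metrics-server   # enable a single addon after start
	out/minikube-linux-amd64 -p addons-808548 addons list                    # inspect the resulting addon states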

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-808548 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-808548 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (8.46s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-808548 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-808548 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0103b3dd-566b-45eb-803e-5794db655669] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0103b3dd-566b-45eb-803e-5794db655669] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004268037s
addons_test.go:694: (dbg) Run:  kubectl --context addons-808548 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-808548 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-808548 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.46s)

TestAddons/StoppedEnableDisable (18.62s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-808548
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-808548: (18.34869698s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-808548
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-808548
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-808548
--- PASS: TestAddons/StoppedEnableDisable (18.62s)

TestCertOptions (28.47s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-318223 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-318223 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.228437711s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-318223 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-318223 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-318223 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-318223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-318223
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-318223: (2.503289301s)
--- PASS: TestCertOptions (28.47s)

TestCertExpiration (214.34s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-202048 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-202048 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.363572304s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-202048 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-202048 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.296465872s)
helpers_test.go:175: Cleaning up "cert-expiration-202048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-202048
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-202048: (2.679575073s)
--- PASS: TestCertExpiration (214.34s)
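
TestCertExpiration restarts the profile with --cert-expiration raised from 3m to 8760h. A hedged sketch for confirming the new certificate lifetime by hand, reading the same cert path TestCertOptions inspects above (assumes the profile still exists):

	out/minikube-linux-amd64 ssh -p cert-expiration-202048 -- \
	  "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"   # prints notBefore/notAfter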

TestForceSystemdFlag (26.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-599050 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-599050 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.184884843s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-599050 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-599050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-599050
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-599050: (2.463064846s)
--- PASS: TestForceSystemdFlag (26.93s)
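
The assertion above cats CRI-O's drop-in config to confirm --force-systemd took effect. A sketch of the same check narrowed to the relevant key (grepping for cgroup_manager is an assumption about the drop-in's contents):

	out/minikube-linux-amd64 ssh -p force-systemd-flag-599050 -- \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"   # expected: cgroup_manager = "systemd"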

TestForceSystemdEnv (31.52s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-834947 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-834947 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.521391796s)
helpers_test.go:175: Cleaning up "force-systemd-env-834947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-834947
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-834947: (3.998638782s)
--- PASS: TestForceSystemdEnv (31.52s)

TestKVMDriverInstallOrUpdate (0.82s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1017 20:08:22.258931  139217 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1017 20:08:22.259093  139217 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3956803100/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1017 20:08:22.293685  139217 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3956803100/001/docker-machine-driver-kvm2 version is 1.1.1
W1017 20:08:22.293726  139217 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1017 20:08:22.293897  139217 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1017 20:08:22.293967  139217 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3956803100/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (0.82s)
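
The install.go lines above find a stale docker-machine-driver-kvm2 (version 1.1.1, want 1.37.0) and re-fetch it from the minikube release page. A sketch of the same update done by hand (the /usr/local/bin install location is illustrative):

	curl -fL -o docker-machine-driver-kvm2 \
	  "https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64"
	chmod +x docker-machine-driver-kvm2                  # driver binaries must be executable
	sudo mv docker-machine-driver-kvm2 /usr/local/bin/   # and on PATH so minikube can find them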

TestErrorSpam/setup (25.01s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-512215 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-512215 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-512215 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-512215 --driver=docker  --container-runtime=crio: (25.006869115s)
--- PASS: TestErrorSpam/setup (25.01s)

TestErrorSpam/start (0.68s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

TestErrorSpam/status (0.97s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 status
--- PASS: TestErrorSpam/status (0.97s)

TestErrorSpam/pause (6.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 pause: exit status 80 (2.286264174s)

-- stdout --
	* Pausing node nospam-512215 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:31:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 pause: exit status 80 (2.032314381s)

-- stdout --
	* Pausing node nospam-512215 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:31:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 pause: exit status 80 (2.381457478s)

-- stdout --
	* Pausing node nospam-512215 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:31:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.70s)
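
Each pause exit-80 above (and the unpause runs below fail the same way) shows one symptom: `sudo runc list -f json` exits non-zero inside the node because /run/runc is missing, so minikube cannot enumerate containers before pausing. A sketch for reproducing the check directly, assuming the nospam-512215 profile is still running (not part of the harness):

	out/minikube-linux-amd64 ssh -p nospam-512215 -- "sudo runc list -f json"   # fails: open /run/runc: no such file or directory
	out/minikube-linux-amd64 ssh -p nospam-512215 -- "ls -ld /run/runc"         # confirms the state directory is absent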

TestErrorSpam/unpause (5.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 unpause: exit status 80 (1.979142873s)

-- stdout --
	* Unpausing node nospam-512215 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:32:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 unpause: exit status 80 (2.278803595s)

-- stdout --
	* Unpausing node nospam-512215 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:32:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 unpause: exit status 80 (1.495986736s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-512215 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:32:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.75s)
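
Note on the repeated GUEST_UNPAUSE failures above: every attempt dies on the same probe, `sudo runc list -f json`, which exits 1 because /run/runc does not exist on the crio node. Below is a minimal Go sketch of that kind of probe, assuming runc's documented JSON list output; the type and function names are illustrative, not minikube's internals.

// Sketch of the paused-container probe that fails above. Assumes runc's
// `list -f json` output: a JSON array of container states.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type containerState struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "running", "paused"
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// The branch hit above: /run/runc is absent, runc exits 1, and the
		// caller has to give up before unpausing anything.
		return nil, fmt.Errorf("list paused: runc: %w", err)
	}
	var states []containerState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	fmt.Println(listPaused())
}

The exit status 80 surfaced by the CLI is minikube's guest-error exit class; the runc exit status 1 that triggered it is preserved in the quoted stderr.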

                                                
                                    
TestErrorSpam/stop (8.13s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 stop: (7.933555713s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-512215 --log_dir /tmp/nospam-512215 stop
--- PASS: TestErrorSpam/stop (8.13s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21664-135723/.minikube/files/etc/test/nested/copy/139217/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (39.89s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558322 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-558322 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.893912623s)
--- PASS: TestFunctional/serial/StartWithProxy (39.89s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.42s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1017 19:32:57.124540  139217 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558322 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-558322 --alsologtostderr -v=8: (6.422656698s)
functional_test.go:678: soft start took 6.423339091s for "functional-558322" cluster.
I1017 19:33:03.547567  139217 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.42s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-558322 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.7s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 cache add registry.k8s.io/pause:3.3
E1017 19:33:05.046969  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:33:05.053467  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:33:05.065004  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:33:05.086546  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:33:05.128102  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:33:05.209595  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:33:05.371206  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 cache add registry.k8s.io/pause:latest
E1017 19:33:05.693482  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:33:06.334920  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.70s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-558322 /tmp/TestFunctionalserialCacheCmdcacheadd_local3496830756/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 cache add minikube-local-cache-test:functional-558322
E1017 19:33:07.617191  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-558322 cache add minikube-local-cache-test:functional-558322: (1.281281217s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 cache delete minikube-local-cache-test:functional-558322
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-558322
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.65s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.910197ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)
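
The cache_reload sequence above is a round-trip: remove the image from the node with `crictl rmi`, prove it is gone (`crictl inspecti` exits 1 with "no such image"), repopulate the node with `minikube cache reload`, and prove it is back. A sketch of the same round-trip driven from Go, reusing the exact binaries and arguments shown above:

// Round-trip check mirroring TestFunctional/serial/CacheCmd/cache/cache_reload.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	const mk = "out/minikube-linux-amd64"
	const profile = "functional-558322"
	const img = "registry.k8s.io/pause:latest"

	_ = run(mk, "-p", profile, "ssh", "sudo crictl rmi "+img)
	// Expected to fail: the image was just removed from the node.
	if run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("image unexpectedly still present")
	}
	// Repopulate the node from minikube's on-host cache...
	_ = run(mk, "-p", profile, "cache", "reload")
	// ...after which inspecti should succeed again.
	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("cache reload did not restore the image:", err)
	}
}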

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
E1017 19:33:10.179009  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 kubectl -- --context functional-558322 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-558322 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (75.5s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558322 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1017 19:33:15.300237  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:33:25.541988  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:33:46.024065  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-558322 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m15.499977558s)
functional_test.go:776: restart took 1m15.500123911s for "functional-558322" cluster.
I1017 19:34:25.906580  139217 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (75.50s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-558322 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
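
ComponentHealth asserts two things per control-plane pod: status.phase is Running and the Ready condition is True, which is exactly what the phase/status pairs above print. A self-contained Go sketch of that check against `kubectl ... -o=json` output (field names follow the Kubernetes pod schema; the context name is the one from this run):

// Check control-plane pod health the way the phase/status lines above imply.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-558322",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}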

                                                
                                    
TestFunctional/serial/LogsCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 logs
E1017 19:34:26.986302  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-558322 logs: (1.337994174s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 logs --file /tmp/TestFunctionalserialLogsFileCmd2811089781/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-558322 logs --file /tmp/TestFunctionalserialLogsFileCmd2811089781/001/logs.txt: (1.357355134s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                    
TestFunctional/serial/InvalidService (4.25s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-558322 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-558322
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-558322: exit status 115 (345.699465ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31721 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-558322 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.25s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 config get cpus: exit status 14 (52.276993ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 config get cpus: exit status 14 (50.052365ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
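
ConfigCmd leans on exit codes rather than output: `config get cpus` exits 14 whenever the key is unset, which is what both unset/get pairs above assert. A small Go sketch of asserting on a specific exit status (that 14 means "key not found" is read off the output above, not a documented guarantee):

// Assert a specific exit code from a subprocess via *exec.ExitError.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-558322",
		"config", "get", "cpus")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("key not set, as expected (exit status 14)")
	} else {
		fmt.Println("unexpected result:", err)
	}
}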

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-558322 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-558322 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 179170: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.14s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558322 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-558322 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (162.119638ms)

                                                
                                                
-- stdout --
	* [functional-558322] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:34:57.478380  178771 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:34:57.478619  178771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:34:57.478628  178771 out.go:374] Setting ErrFile to fd 2...
	I1017 19:34:57.478632  178771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:34:57.478878  178771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:34:57.479331  178771 out.go:368] Setting JSON to false
	I1017 19:34:57.480335  178771 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4645,"bootTime":1760725052,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:34:57.480438  178771 start.go:141] virtualization: kvm guest
	I1017 19:34:57.482536  178771 out.go:179] * [functional-558322] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:34:57.484099  178771 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 19:34:57.484101  178771 notify.go:220] Checking for updates...
	I1017 19:34:57.487571  178771 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:34:57.488993  178771 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 19:34:57.490332  178771 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 19:34:57.491555  178771 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:34:57.492825  178771 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:34:57.494603  178771 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:34:57.495144  178771 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:34:57.519406  178771 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:34:57.519504  178771 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:34:57.579593  178771 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-17 19:34:57.570176612 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:34:57.579698  178771 docker.go:318] overlay module found
	I1017 19:34:57.581777  178771 out.go:179] * Using the docker driver based on existing profile
	I1017 19:34:57.583372  178771 start.go:305] selected driver: docker
	I1017 19:34:57.583393  178771 start.go:925] validating driver "docker" against &{Name:functional-558322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:34:57.583490  178771 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:34:57.585296  178771 out.go:203] 
	W1017 19:34:57.586866  178771 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1017 19:34:57.588336  178771 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558322 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
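
The dry-run rejection is a pre-flight floor check: the requested 250MB is compared against the 1800MB usable minimum before any driver work starts, hence exit status 23 and RSRC_INSUFFICIENT_REQ_MEMORY with no container ever touched. A hedged sketch of that style of validation (the constant and message are modeled on the error text above, not copied from minikube's source):

// Pre-flight memory validation in the spirit of the dry-run failure above.
package main

import "fmt"

const minUsableMemoryMB = 1800 // floor quoted in the error message above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf(
			"RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Println("X Exiting due to", err) // the exit-status-23 path
	}
}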

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558322 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-558322 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (160.649339ms)

                                                
                                                
-- stdout --
	* [functional-558322] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:34:56.364308  178342 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:34:56.364409  178342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:34:56.364414  178342 out.go:374] Setting ErrFile to fd 2...
	I1017 19:34:56.364420  178342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:34:56.364775  178342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:34:56.365239  178342 out.go:368] Setting JSON to false
	I1017 19:34:56.366356  178342 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4644,"bootTime":1760725052,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:34:56.366465  178342 start.go:141] virtualization: kvm guest
	I1017 19:34:56.368645  178342 out.go:179] * [functional-558322] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1017 19:34:56.370301  178342 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 19:34:56.370292  178342 notify.go:220] Checking for updates...
	I1017 19:34:56.371814  178342 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:34:56.373371  178342 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 19:34:56.374907  178342 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 19:34:56.376419  178342 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:34:56.377852  178342 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:34:56.379986  178342 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:34:56.380470  178342 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:34:56.404997  178342 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 19:34:56.405110  178342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:34:56.461945  178342 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-17 19:34:56.451465497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:34:56.462056  178342 docker.go:318] overlay module found
	I1017 19:34:56.464094  178342 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1017 19:34:56.466173  178342 start.go:305] selected driver: docker
	I1017 19:34:56.466191  178342 start.go:925] validating driver "docker" against &{Name:functional-558322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:34:56.466305  178342 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:34:56.468393  178342 out.go:203] 
	W1017 19:34:56.469869  178342 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1017 19:34:56.471190  178342 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f280373b-81fb-48c0-9e3e-58bf59ef7927] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003695668s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-558322 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-558322 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-558322 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-558322 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a382abfd-4033-4214-b69b-e6b949a6460f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [a382abfd-4033-4214-b69b-e6b949a6460f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004060373s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-558322 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-558322 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-558322 apply -f testdata/storage-provisioner/pod.yaml
I1017 19:34:54.116809  139217 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d1641993-92d1-4767-bd44-858c9859132b] Pending
helpers_test.go:352: "sp-pod" [d1641993-92d1-4767-bd44-858c9859132b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d1641993-92d1-4767-bd44-858c9859132b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004034106s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-558322 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.42s)
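
The PersistentVolumeClaim flow above is a persistence round-trip: write a marker file through the first pod, delete that pod, schedule a replacement against the same claim, and verify the marker survived. A sketch of the same steps driven directly through kubectl (context, pod name, and manifest path are the ones visible in the log; the wait-for-Running step is elided):

// PVC persistence round-trip, as exercised by the test above.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) ([]byte, error) {
	base := []string{"--context", "functional-558322"}
	return exec.Command("kubectl", append(base, args...)...).CombinedOutput()
}

func main() {
	// 1. Write a marker into the PVC-backed mount through the first pod.
	if out, err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		fmt.Printf("write failed: %v\n%s", err, out)
		return
	}
	// 2. Delete the pod and recreate it; the claim must outlive the pod.
	_, _ = kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_, _ = kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits here until the replacement pod is Running)
	// 3. The marker must still be visible from the replacement pod.
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("ls /tmp/mount -> %s err=%v\n", out, err)
}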

                                                
                                    
TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh -n functional-558322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 cp functional-558322:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2836620266/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh -n functional-558322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh -n functional-558322 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.77s)

                                                
                                    
TestFunctional/parallel/MySQL (17.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-558322 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-kbvc9" [cb5d1f84-8465-4c2f-b784-1bbc611e18a3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-kbvc9" [cb5d1f84-8465-4c2f-b784-1bbc611e18a3] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.003557974s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-558322 exec mysql-5bb876957f-kbvc9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-558322 exec mysql-5bb876957f-kbvc9 -- mysql -ppassword -e "show databases;": exit status 1 (90.216372ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1017 19:34:48.264650  139217 retry.go:31] will retry after 1.397934904s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-558322 exec mysql-5bb876957f-kbvc9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-558322 exec mysql-5bb876957f-kbvc9 -- mysql -ppassword -e "show databases;": exit status 1 (97.285903ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1017 19:34:49.760261  139217 retry.go:31] will retry after 1.282251225s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-558322 exec mysql-5bb876957f-kbvc9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (17.19s)
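
The MySQL test tolerates a warm-up window: the pod reports Running before mysqld accepts socket connections, so `show databases;` is retried with a backoff (about 1.40s, then 1.28s above) until it succeeds. A sketch of that retry shape; the schedule below is illustrative, not retry.go's actual jittered backoff:

// Retry a readiness probe until mysqld accepts connections.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func showDatabases() error {
	return exec.Command("kubectl", "--context", "functional-558322",
		"exec", "mysql-5bb876957f-kbvc9", "--",
		"mysql", "-ppassword", "-e", "show databases;").Run()
}

func main() {
	var err error
	for attempt, backoff := 1, time.Second; attempt <= 5; attempt++ {
		if err = showDatabases(); err == nil {
			fmt.Println("mysql is ready")
			return
		}
		// A Running pod does not mean mysqld is listening yet; wait and retry.
		fmt.Printf("attempt %d failed (%v); will retry after %s\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff += backoff / 2
	}
	fmt.Println("giving up:", err)
}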

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/139217/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "sudo cat /etc/test/nested/copy/139217/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/139217.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "sudo cat /etc/ssl/certs/139217.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/139217.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "sudo cat /usr/share/ca-certificates/139217.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1392172.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "sudo cat /etc/ssl/certs/1392172.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1392172.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "sudo cat /usr/share/ca-certificates/1392172.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.83s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-558322 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 ssh "sudo systemctl is-active docker": exit status 1 (293.944683ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 ssh "sudo systemctl is-active containerd": exit status 1 (304.190813ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
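Note: the "exit status 3" relayed by ssh above is systemctl's conventional exit code for an inactive unit, so "inactive" on stdout plus a non-zero exit is exactly what this test expects for the non-selected runtimes on a crio node. A hedged sketch of that check, reusing the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// unitState asks the node for a unit's state; `systemctl is-active` prints the
// state and exits non-zero (typically 3) when the unit is not active.
func unitState(profile, unit string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		state := unitState("functional-558322", unit)
		fmt.Printf("%s: %s\n", unit, state)
		if state != "inactive" {
			fmt.Println("unexpected: a non-selected runtime is running")
		}
	}
}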

                                                
                                    
x
+
TestFunctional/parallel/License (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558322 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558322 image ls --format short --alsologtostderr:
I1017 19:35:03.389458  179715 out.go:360] Setting OutFile to fd 1 ...
I1017 19:35:03.389580  179715 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:03.389588  179715 out.go:374] Setting ErrFile to fd 2...
I1017 19:35:03.389592  179715 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:03.389891  179715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
I1017 19:35:03.390627  179715 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:03.390790  179715 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:03.391349  179715 cli_runner.go:164] Run: docker container inspect functional-558322 --format={{.State.Status}}
I1017 19:35:03.412901  179715 ssh_runner.go:195] Run: systemctl --version
I1017 19:35:03.412955  179715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558322
I1017 19:35:03.435967  179715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32899 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/functional-558322/id_rsa Username:docker}
I1017 19:35:03.544582  179715 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558322 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558322 image ls --format table --alsologtostderr:
I1017 19:35:06.218194  180121 out.go:360] Setting OutFile to fd 1 ...
I1017 19:35:06.218453  180121 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:06.218462  180121 out.go:374] Setting ErrFile to fd 2...
I1017 19:35:06.218465  180121 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:06.218697  180121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
I1017 19:35:06.219316  180121 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:06.219407  180121 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:06.219792  180121 cli_runner.go:164] Run: docker container inspect functional-558322 --format={{.State.Status}}
I1017 19:35:06.240072  180121 ssh_runner.go:195] Run: systemctl --version
I1017 19:35:06.240131  180121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558322
I1017 19:35:06.258772  180121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32899 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/functional-558322/id_rsa Username:docker}
I1017 19:35:06.355766  180121 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558322 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-
minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"314705
24"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca
9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e
3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07ccdb7838758e758a4d52a9761636c3851
25a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1
.34.1"],"size":"76004181"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558322 image ls --format json --alsologtostderr:
I1017 19:35:06.002803  180052 out.go:360] Setting OutFile to fd 1 ...
I1017 19:35:06.002924  180052 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:06.002936  180052 out.go:374] Setting ErrFile to fd 2...
I1017 19:35:06.002941  180052 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:06.003178  180052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
I1017 19:35:06.003857  180052 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:06.003994  180052 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:06.004433  180052 cli_runner.go:164] Run: docker container inspect functional-558322 --format={{.State.Status}}
I1017 19:35:06.022148  180052 ssh_runner.go:195] Run: systemctl --version
I1017 19:35:06.022205  180052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558322
I1017 19:35:06.042049  180052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32899 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/functional-558322/id_rsa Username:docker}
I1017 19:35:06.138898  180052 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
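Note: the array printed above is the shape `image ls --format json` emits: objects with id, repoDigests, repoTags (empty for untagged images such as the dashboard and metrics-scraper entries), and size as a decimal string. A minimal consumer sketch under those assumptions:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the output above; size arrives as a
// string of bytes, not a number.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-558322",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tag := "<none>" // the dashboard and metrics-scraper entries above carry no repoTags
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-55s %s bytes\n", tag, img.Size)
	}
}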

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-558322 image ls --format yaml --alsologtostderr: (1.283781289s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558322 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558322 image ls --format yaml --alsologtostderr:
I1017 19:35:03.653947  179770 out.go:360] Setting OutFile to fd 1 ...
I1017 19:35:03.654081  179770 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:03.654093  179770 out.go:374] Setting ErrFile to fd 2...
I1017 19:35:03.654099  179770 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:03.654378  179770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
I1017 19:35:03.655227  179770 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:03.655361  179770 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:03.655904  179770 cli_runner.go:164] Run: docker container inspect functional-558322 --format={{.State.Status}}
I1017 19:35:03.678245  179770 ssh_runner.go:195] Run: systemctl --version
I1017 19:35:03.678308  179770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558322
I1017 19:35:03.701116  179770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32899 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/functional-558322/id_rsa Username:docker}
I1017 19:35:03.803961  179770 ssh_runner.go:195] Run: sudo crictl images --output json
I1017 19:35:04.869984  179770 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.065987473s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 ssh pgrep buildkitd: exit status 1 (281.404698ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image build -t localhost/my-image:functional-558322 testdata/build --alsologtostderr
2025/10/17 19:35:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-558322 image build -t localhost/my-image:functional-558322 testdata/build --alsologtostderr: (2.901171828s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558322 image build -t localhost/my-image:functional-558322 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 615875eb753
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-558322
--> 4e56d6b38b2
Successfully tagged localhost/my-image:functional-558322
4e56d6b38b242f85733e8a3d7c20a00dc2d1541aee2f428669fa8419aadad643
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558322 image build -t localhost/my-image:functional-558322 testdata/build --alsologtostderr:
I1017 19:35:05.202848  179981 out.go:360] Setting OutFile to fd 1 ...
I1017 19:35:05.203157  179981 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:05.203169  179981 out.go:374] Setting ErrFile to fd 2...
I1017 19:35:05.203176  179981 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:05.203370  179981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
I1017 19:35:05.203985  179981 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:05.204710  179981 config.go:182] Loaded profile config "functional-558322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:05.205142  179981 cli_runner.go:164] Run: docker container inspect functional-558322 --format={{.State.Status}}
I1017 19:35:05.222249  179981 ssh_runner.go:195] Run: systemctl --version
I1017 19:35:05.222320  179981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558322
I1017 19:35:05.239809  179981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32899 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/functional-558322/id_rsa Username:docker}
I1017 19:35:05.334484  179981 build_images.go:161] Building image from path: /tmp/build.282457095.tar
I1017 19:35:05.334557  179981 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1017 19:35:05.343077  179981 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.282457095.tar
I1017 19:35:05.346994  179981 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.282457095.tar: stat -c "%s %y" /var/lib/minikube/build/build.282457095.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.282457095.tar': No such file or directory
I1017 19:35:05.347021  179981 ssh_runner.go:362] scp /tmp/build.282457095.tar --> /var/lib/minikube/build/build.282457095.tar (3072 bytes)
I1017 19:35:05.365988  179981 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.282457095
I1017 19:35:05.374414  179981 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.282457095 -xf /var/lib/minikube/build/build.282457095.tar
I1017 19:35:05.382926  179981 crio.go:315] Building image: /var/lib/minikube/build/build.282457095
I1017 19:35:05.382984  179981 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-558322 /var/lib/minikube/build/build.282457095 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1017 19:35:08.034727  179981 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-558322 /var/lib/minikube/build/build.282457095 --cgroup-manager=cgroupfs: (2.651719382s)
I1017 19:35:08.034854  179981 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.282457095
I1017 19:35:08.043720  179981 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.282457095.tar
I1017 19:35:08.051792  179981 build_images.go:217] Built localhost/my-image:functional-558322 from /tmp/build.282457095.tar
I1017 19:35:08.051831  179981 build_images.go:133] succeeded building to: functional-558322
I1017 19:35:08.051837  179981 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image ls
E1017 19:35:48.908635  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:38:05.046928  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:38:32.750965  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:43:05.046527  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.40s)
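Note: on a crio node there is no Docker daemon to build against, so the flow in the stderr above is: tar the build context locally, scp it to /var/lib/minikube/build, untar it, run podman build with --cgroup-manager=cgroupfs, and clean up. A sketch replaying those node-side steps over `minikube ssh`, with the tar path copied from this run (it is freshly generated on every run):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one command inside the node, standing in for ssh_runner above.
func run(script string) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-558322",
		"ssh", script).CombinedOutput()
	fmt.Printf("$ %s\n%s", script, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	// Assumes the context tar was already copied in, as ssh_runner.go:362 does.
	const tar = "/var/lib/minikube/build/build.282457095.tar"
	const dir = "/var/lib/minikube/build/build.282457095"
	run("sudo mkdir -p " + dir)
	run("sudo tar -C " + dir + " -xf " + tar)
	// podman stands in for the missing Docker daemon on crio.
	run("sudo podman build -t localhost/my-image:functional-558322 " + dir +
		" --cgroup-manager=cgroupfs")
	run("sudo rm -rf " + dir)
	run("sudo rm -f " + tar)
}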

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.589633108s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-558322
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.62s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-558322 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-558322 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-558322 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-558322 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 173136: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-558322 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-558322 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d141c487-5f85-478c-8024-d10a9f39febd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d141c487-5f85-478c-8024-d10a9f39febd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.004486312s
I1017 19:34:49.714693  139217 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.24s)
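Note: the wait above is label-based polling: list pods matching run=nginx-svc and keep checking until one reports Running, within a 4m0s budget. A minimal kubectl-based sketch of that wait, reusing the context and label from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // same budget as the test
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", "functional-558322",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("run=nginx-svc healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for run=nginx-svc")
}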

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image rm kicbase/echo-server:functional-558322 --alsologtostderr
I1017 19:34:41.218389  139217 detect.go:223] nested VM detected
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558322 /tmp/TestFunctionalparallelMountCmdany-port4251462521/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760729682320620295" to /tmp/TestFunctionalparallelMountCmdany-port4251462521/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760729682320620295" to /tmp/TestFunctionalparallelMountCmdany-port4251462521/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760729682320620295" to /tmp/TestFunctionalparallelMountCmdany-port4251462521/001/test-1760729682320620295
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (336.123848ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1017 19:34:42.657119  139217 retry.go:31] will retry after 515.239656ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 17 19:34 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 17 19:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 17 19:34 test-1760729682320620295
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh cat /mount-9p/test-1760729682320620295
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-558322 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [cf9bd1c4-5c57-4ddf-ad9c-95a1107b14ea] Pending
helpers_test.go:352: "busybox-mount" [cf9bd1c4-5c57-4ddf-ad9c-95a1107b14ea] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [cf9bd1c4-5c57-4ddf-ad9c-95a1107b14ea] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [cf9bd1c4-5c57-4ddf-ad9c-95a1107b14ea] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.002999418s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-558322 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558322 /tmp/TestFunctionalparallelMountCmdany-port4251462521/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.91s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-558322 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.74.132 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-558322 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558322 /tmp/TestFunctionalparallelMountCmdspecific-port515470538/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.82097ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1017 19:34:51.531882  139217 retry.go:31] will retry after 714.991056ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558322 /tmp/TestFunctionalparallelMountCmdspecific-port515470538/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 ssh "sudo umount -f /mount-9p": exit status 1 (272.155373ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-558322 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558322 /tmp/TestFunctionalparallelMountCmdspecific-port515470538/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3858166900/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3858166900/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3858166900/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558322 ssh "findmnt -T" /mount1: exit status 1 (369.199706ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1017 19:34:53.641209  139217 retry.go:31] will retry after 561.673079ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-558322 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3858166900/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3858166900/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3858166900/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "348.259589ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "52.468465ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "340.637646ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "52.093311ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-558322 service list: (1.756082792s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.76s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-558322 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-558322 service list -o json: (1.705814136s)
functional_test.go:1504: Took "1.705947189s" to run "out/minikube-linux-amd64 -p functional-558322 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-558322
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-558322
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-558322
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (141.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-820501 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m21.196507223s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (141.93s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-820501 kubectl -- rollout status deployment/busybox: (3.016533164s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-965qq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-n8mg4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-z9sq6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-965qq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-n8mg4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-z9sq6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-965qq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-n8mg4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-z9sq6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.85s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-965qq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-965qq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-n8mg4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-n8mg4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-z9sq6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 kubectl -- exec busybox-7b57f96db7-z9sq6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.99s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-820501 node add --alsologtostderr -v 5: (23.443319864s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.33s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-820501 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp testdata/cp-test.txt ha-820501:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4252047986/001/cp-test_ha-820501.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501:/home/docker/cp-test.txt ha-820501-m02:/home/docker/cp-test_ha-820501_ha-820501-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m02 "sudo cat /home/docker/cp-test_ha-820501_ha-820501-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501:/home/docker/cp-test.txt ha-820501-m03:/home/docker/cp-test_ha-820501_ha-820501-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m03 "sudo cat /home/docker/cp-test_ha-820501_ha-820501-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501:/home/docker/cp-test.txt ha-820501-m04:/home/docker/cp-test_ha-820501_ha-820501-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m04 "sudo cat /home/docker/cp-test_ha-820501_ha-820501-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp testdata/cp-test.txt ha-820501-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4252047986/001/cp-test_ha-820501-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501-m02:/home/docker/cp-test.txt ha-820501:/home/docker/cp-test_ha-820501-m02_ha-820501.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501 "sudo cat /home/docker/cp-test_ha-820501-m02_ha-820501.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501-m02:/home/docker/cp-test.txt ha-820501-m03:/home/docker/cp-test_ha-820501-m02_ha-820501-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m03 "sudo cat /home/docker/cp-test_ha-820501-m02_ha-820501-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501-m02:/home/docker/cp-test.txt ha-820501-m04:/home/docker/cp-test_ha-820501-m02_ha-820501-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m04 "sudo cat /home/docker/cp-test_ha-820501-m02_ha-820501-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp testdata/cp-test.txt ha-820501-m03:/home/docker/cp-test.txt
E1017 19:48:05.046351  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4252047986/001/cp-test_ha-820501-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501-m03:/home/docker/cp-test.txt ha-820501:/home/docker/cp-test_ha-820501-m03_ha-820501.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501 "sudo cat /home/docker/cp-test_ha-820501-m03_ha-820501.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501-m03:/home/docker/cp-test.txt ha-820501-m02:/home/docker/cp-test_ha-820501-m03_ha-820501-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m02 "sudo cat /home/docker/cp-test_ha-820501-m03_ha-820501-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501-m03:/home/docker/cp-test.txt ha-820501-m04:/home/docker/cp-test_ha-820501-m03_ha-820501-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m04 "sudo cat /home/docker/cp-test_ha-820501-m03_ha-820501-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp testdata/cp-test.txt ha-820501-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4252047986/001/cp-test_ha-820501-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501-m04:/home/docker/cp-test.txt ha-820501:/home/docker/cp-test_ha-820501-m04_ha-820501.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501 "sudo cat /home/docker/cp-test_ha-820501-m04_ha-820501.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501-m04:/home/docker/cp-test.txt ha-820501-m02:/home/docker/cp-test_ha-820501-m04_ha-820501-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m02 "sudo cat /home/docker/cp-test_ha-820501-m04_ha-820501-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 cp ha-820501-m04:/home/docker/cp-test.txt ha-820501-m03:/home/docker/cp-test_ha-820501-m04_ha-820501-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 ssh -n ha-820501-m03 "sudo cat /home/docker/cp-test_ha-820501-m04_ha-820501-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.02s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (19.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-820501 node stop m02 --alsologtostderr -v 5: (19.117768786s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-820501 status --alsologtostderr -v 5: exit status 7 (704.026206ms)

                                                
                                                
-- stdout --
	ha-820501
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-820501-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-820501-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-820501-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:48:32.007295  204287 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:48:32.007557  204287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:48:32.007567  204287 out.go:374] Setting ErrFile to fd 2...
	I1017 19:48:32.007571  204287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:48:32.007735  204287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:48:32.007922  204287 out.go:368] Setting JSON to false
	I1017 19:48:32.007950  204287 mustload.go:65] Loading cluster: ha-820501
	I1017 19:48:32.008023  204287 notify.go:220] Checking for updates...
	I1017 19:48:32.008420  204287 config.go:182] Loaded profile config "ha-820501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:48:32.008441  204287 status.go:174] checking status of ha-820501 ...
	I1017 19:48:32.008989  204287 cli_runner.go:164] Run: docker container inspect ha-820501 --format={{.State.Status}}
	I1017 19:48:32.029782  204287 status.go:371] ha-820501 host status = "Running" (err=<nil>)
	I1017 19:48:32.029811  204287 host.go:66] Checking if "ha-820501" exists ...
	I1017 19:48:32.030175  204287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-820501
	I1017 19:48:32.049873  204287 host.go:66] Checking if "ha-820501" exists ...
	I1017 19:48:32.050400  204287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:48:32.050471  204287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-820501
	I1017 19:48:32.069691  204287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/ha-820501/id_rsa Username:docker}
	I1017 19:48:32.165153  204287 ssh_runner.go:195] Run: systemctl --version
	I1017 19:48:32.171915  204287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:48:32.185024  204287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:48:32.244600  204287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-17 19:48:32.234650203 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 19:48:32.245196  204287 kubeconfig.go:125] found "ha-820501" server: "https://192.168.49.254:8443"
	I1017 19:48:32.245228  204287 api_server.go:166] Checking apiserver status ...
	I1017 19:48:32.245276  204287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:48:32.257730  204287 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1272/cgroup
	W1017 19:48:32.267163  204287 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1272/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:48:32.267223  204287 ssh_runner.go:195] Run: ls
	I1017 19:48:32.271846  204287 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1017 19:48:32.276137  204287 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1017 19:48:32.276164  204287 status.go:463] ha-820501 apiserver status = Running (err=<nil>)
	I1017 19:48:32.276173  204287 status.go:176] ha-820501 status: &{Name:ha-820501 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:48:32.276189  204287 status.go:174] checking status of ha-820501-m02 ...
	I1017 19:48:32.276459  204287 cli_runner.go:164] Run: docker container inspect ha-820501-m02 --format={{.State.Status}}
	I1017 19:48:32.294683  204287 status.go:371] ha-820501-m02 host status = "Stopped" (err=<nil>)
	I1017 19:48:32.294705  204287 status.go:384] host is not running, skipping remaining checks
	I1017 19:48:32.294712  204287 status.go:176] ha-820501-m02 status: &{Name:ha-820501-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:48:32.294731  204287 status.go:174] checking status of ha-820501-m03 ...
	I1017 19:48:32.295044  204287 cli_runner.go:164] Run: docker container inspect ha-820501-m03 --format={{.State.Status}}
	I1017 19:48:32.314217  204287 status.go:371] ha-820501-m03 host status = "Running" (err=<nil>)
	I1017 19:48:32.314243  204287 host.go:66] Checking if "ha-820501-m03" exists ...
	I1017 19:48:32.314518  204287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-820501-m03
	I1017 19:48:32.334193  204287 host.go:66] Checking if "ha-820501-m03" exists ...
	I1017 19:48:32.334488  204287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:48:32.334584  204287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-820501-m03
	I1017 19:48:32.354024  204287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/ha-820501-m03/id_rsa Username:docker}
	I1017 19:48:32.449344  204287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:48:32.463105  204287 kubeconfig.go:125] found "ha-820501" server: "https://192.168.49.254:8443"
	I1017 19:48:32.463135  204287 api_server.go:166] Checking apiserver status ...
	I1017 19:48:32.463167  204287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:48:32.474731  204287 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W1017 19:48:32.483933  204287 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:48:32.483995  204287 ssh_runner.go:195] Run: ls
	I1017 19:48:32.488396  204287 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1017 19:48:32.493550  204287 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1017 19:48:32.493575  204287 status.go:463] ha-820501-m03 apiserver status = Running (err=<nil>)
	I1017 19:48:32.493583  204287 status.go:176] ha-820501-m03 status: &{Name:ha-820501-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:48:32.493603  204287 status.go:174] checking status of ha-820501-m04 ...
	I1017 19:48:32.493944  204287 cli_runner.go:164] Run: docker container inspect ha-820501-m04 --format={{.State.Status}}
	I1017 19:48:32.511958  204287 status.go:371] ha-820501-m04 host status = "Running" (err=<nil>)
	I1017 19:48:32.511987  204287 host.go:66] Checking if "ha-820501-m04" exists ...
	I1017 19:48:32.512251  204287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-820501-m04
	I1017 19:48:32.530750  204287 host.go:66] Checking if "ha-820501-m04" exists ...
	I1017 19:48:32.531022  204287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:48:32.531071  204287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-820501-m04
	I1017 19:48:32.549014  204287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32919 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/ha-820501-m04/id_rsa Username:docker}
	I1017 19:48:32.647122  204287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:48:32.660524  204287 status.go:176] ha-820501-m04 status: &{Name:ha-820501-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.82s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (9.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-820501 node start m02 --alsologtostderr -v 5: (8.132448419s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 stop --alsologtostderr -v 5
E1017 19:49:28.114893  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-820501 stop --alsologtostderr -v 5: (48.783535041s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 start --wait true --alsologtostderr -v 5
E1017 19:49:34.171819  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:49:34.178344  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:49:34.189851  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:49:34.211473  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:49:34.252954  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:49:34.334548  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:49:34.496300  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:49:34.818007  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:49:35.460349  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:49:36.742457  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:49:39.304538  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:49:44.426823  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:49:54.668675  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:50:15.150522  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-820501 start --wait true --alsologtostderr -v 5: (58.662934016s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.56s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-820501 node delete m03 --alsologtostderr -v 5: (9.79487272s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.63s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (46.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 stop --alsologtostderr -v 5
E1017 19:50:56.112770  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-820501 stop --alsologtostderr -v 5: (46.490444949s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-820501 status --alsologtostderr -v 5: exit status 7 (112.39625ms)

                                                
                                                
-- stdout --
	ha-820501
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-820501-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-820501-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:51:28.825316  218251 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:51:28.825570  218251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:51:28.825578  218251 out.go:374] Setting ErrFile to fd 2...
	I1017 19:51:28.825582  218251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:51:28.825817  218251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 19:51:28.826031  218251 out.go:368] Setting JSON to false
	I1017 19:51:28.826061  218251 mustload.go:65] Loading cluster: ha-820501
	I1017 19:51:28.826205  218251 notify.go:220] Checking for updates...
	I1017 19:51:28.826432  218251 config.go:182] Loaded profile config "ha-820501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:51:28.826445  218251 status.go:174] checking status of ha-820501 ...
	I1017 19:51:28.827086  218251 cli_runner.go:164] Run: docker container inspect ha-820501 --format={{.State.Status}}
	I1017 19:51:28.845988  218251 status.go:371] ha-820501 host status = "Stopped" (err=<nil>)
	I1017 19:51:28.846012  218251 status.go:384] host is not running, skipping remaining checks
	I1017 19:51:28.846018  218251 status.go:176] ha-820501 status: &{Name:ha-820501 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:51:28.846041  218251 status.go:174] checking status of ha-820501-m02 ...
	I1017 19:51:28.846336  218251 cli_runner.go:164] Run: docker container inspect ha-820501-m02 --format={{.State.Status}}
	I1017 19:51:28.865617  218251 status.go:371] ha-820501-m02 host status = "Stopped" (err=<nil>)
	I1017 19:51:28.865664  218251 status.go:384] host is not running, skipping remaining checks
	I1017 19:51:28.865673  218251 status.go:176] ha-820501-m02 status: &{Name:ha-820501-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:51:28.865698  218251 status.go:174] checking status of ha-820501-m04 ...
	I1017 19:51:28.866011  218251 cli_runner.go:164] Run: docker container inspect ha-820501-m04 --format={{.State.Status}}
	I1017 19:51:28.884432  218251 status.go:371] ha-820501-m04 host status = "Stopped" (err=<nil>)
	I1017 19:51:28.884460  218251 status.go:384] host is not running, skipping remaining checks
	I1017 19:51:28.884469  218251 status.go:176] ha-820501-m04 status: &{Name:ha-820501-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (46.60s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (54.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1017 19:52:18.035122  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-820501 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (53.394877996s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.29s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (34.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-820501 node add --control-plane --alsologtostderr -v 5: (33.442980828s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-820501 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (34.34s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

                                                
                                    
TestJSONOutput/start/Command (39.35s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-415849 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-415849 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (39.348362019s)
--- PASS: TestJSONOutput/start/Command (39.35s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-415849 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-415849 --output=json --user=testUser: (8.003621387s)
--- PASS: TestJSONOutput/stop/Command (8.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-251837 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-251837 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.656569ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8194b7bf-378a-4f5f-abba-9dbd28455dd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-251837] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a35650b6-fe21-47d6-bb66-5beb836adefd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21664"}}
	{"specversion":"1.0","id":"862f08ca-3ced-4ac5-9ab8-fa1cccaa335b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"48b4fd0c-9654-4531-b578-9f81d80ab921","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig"}}
	{"specversion":"1.0","id":"608e7103-0ed6-4756-961c-ace1c7d32c31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube"}}
	{"specversion":"1.0","id":"f4ac67c2-28e4-422e-8e24-cd151192cfec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d88a2a23-9d2c-42c7-a767-84ce1b8597b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"35ff696e-6e41-4d01-96fd-cff17ed0030b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-251837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-251837
--- PASS: TestErrorJSONOutput (0.22s)
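
Side note on the stream above: every line that --output=json emits is a CloudEvents envelope whose type field distinguishes step, info, and error events. A minimal sketch of picking out the error event, assuming jq is available on the host (jq is not part of the test harness; the event type and field names are copied from the log):

    out/minikube-linux-amd64 start -p json-output-error-251837 --memory=3072 \
        --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error")
               | .data.exitcode + ": " + .data.message'
    # per the run above, prints: 56: The driver 'fail' is not supported on linux/amd64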

                                                
                                    
TestKicCustomNetwork/create_custom_network (28.7s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-703408 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-703408 --network=: (26.525791387s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-703408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-703408
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-703408: (2.153827005s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.70s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.15s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-845835 --network=bridge
E1017 19:54:34.170990  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-845835 --network=bridge: (22.069573057s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-845835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-845835
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-845835: (2.056453275s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.15s)

                                                
                                    
TestKicExistingNetwork (25.83s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1017 19:54:57.023644  139217 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1017 19:54:57.041278  139217 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1017 19:54:57.041379  139217 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1017 19:54:57.041404  139217 cli_runner.go:164] Run: docker network inspect existing-network
W1017 19:54:57.059090  139217 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1017 19:54:57.059120  139217 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1017 19:54:57.059134  139217 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1017 19:54:57.059263  139217 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1017 19:54:57.078237  139217 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d34a70da1174 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:b8:c9:c3:2e:b0} reservation:<nil>}
I1017 19:54:57.078712  139217 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003df670}
I1017 19:54:57.078754  139217 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1017 19:54:57.078815  139217 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1017 19:54:57.138593  139217 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-619459 --network=existing-network
E1017 19:55:01.879934  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-619459 --network=existing-network: (23.672217675s)
helpers_test.go:175: Cleaning up "existing-network-619459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-619459
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-619459: (2.008398768s)
I1017 19:55:22.838231  139217 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.83s)
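
For reference, the pre-created network above can be reproduced by hand. The docker network create flags are copied verbatim from the log; only the final docker network rm is an added cleanup step that the logged run performs internally:

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=existing-network existing-network
    out/minikube-linux-amd64 start -p existing-network-619459 --network=existing-network
    out/minikube-linux-amd64 delete -p existing-network-619459
    docker network rm existing-network   # cleanup; not shown as a discrete step in the log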

                                                
                                    
TestKicCustomSubnet (24.79s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-028767 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-028767 --subnet=192.168.60.0/24: (22.526840467s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-028767 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-028767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-028767
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-028767: (2.245720893s)
--- PASS: TestKicCustomSubnet (24.79s)

                                                
                                    
TestKicStaticIP (27.55s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-310503 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-310503 --static-ip=192.168.200.200: (25.229354759s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-310503 ip
helpers_test.go:175: Cleaning up "static-ip-310503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-310503
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-310503: (2.175091621s)
--- PASS: TestKicStaticIP (27.55s)
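
Taken together with TestKicCustomSubnet above, the verification pattern is the same both times: start with the pinning flag, then read the value back. Commands are from the logs; the expected outputs are inferred rather than captured above:

    out/minikube-linux-amd64 start -p custom-subnet-028767 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-028767 --format "{{(index .IPAM.Config 0).Subnet}}"
    # expected: 192.168.60.0/24
    out/minikube-linux-amd64 start -p static-ip-310503 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-310503 ip
    # expected: 192.168.200.200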

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (48.43s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-239247 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-239247 --driver=docker  --container-runtime=crio: (21.42308316s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-242095 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-242095 --driver=docker  --container-runtime=crio: (20.925079087s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-239247
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-242095
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-242095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-242095
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-242095: (2.401156622s)
helpers_test.go:175: Cleaning up "first-239247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-239247
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-239247: (2.436841827s)
--- PASS: TestMinikubeProfile (48.43s)
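
A sketch of the profile round-trip above, assuming jq is installed; the valid[].Name shape of the profile list -ojson payload is an assumption, not something this log shows:

    out/minikube-linux-amd64 profile first-239247        # select the active profile
    out/minikube-linux-amd64 profile list -ojson \
      | jq -r '.valid[].Name'                            # assumed JSON shape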

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-785552 --memory=3072 --mount-string /tmp/TestMountStartserial1438053815/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-785552 --memory=3072 --mount-string /tmp/TestMountStartserial1438053815/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.887728607s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.89s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-785552 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
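
The two mount tests above reduce to two commands, both copied verbatim from the log: wire the 9p mount up at start time, then verify it with an ls over ssh:

    out/minikube-linux-amd64 start -p mount-start-1-785552 --memory=3072 \
      --mount-string /tmp/TestMountStartserial1438053815/001:/minikube-host \
      --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p mount-start-1-785552 ssh -- ls /minikube-host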

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.46s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-802802 --memory=3072 --mount-string /tmp/TestMountStartserial1438053815/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-802802 --memory=3072 --mount-string /tmp/TestMountStartserial1438053815/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.46025151s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.46s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-802802 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-785552 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-785552 --alsologtostderr -v=5: (1.710983944s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-802802 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-802802
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-802802: (1.253652583s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.27s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-802802
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-802802: (6.266170585s)
--- PASS: TestMountStart/serial/RestartStopped (7.27s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-802802 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (91.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-813597 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1017 19:58:05.047049  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-813597 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m31.495412307s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (91.98s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-813597 -- rollout status deployment/busybox: (3.012966449s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- exec busybox-7b57f96db7-jvf9m -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- exec busybox-7b57f96db7-z67wh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- exec busybox-7b57f96db7-jvf9m -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- exec busybox-7b57f96db7-z67wh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- exec busybox-7b57f96db7-jvf9m -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- exec busybox-7b57f96db7-z67wh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.39s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- exec busybox-7b57f96db7-jvf9m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- exec busybox-7b57f96db7-jvf9m -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- exec busybox-7b57f96db7-z67wh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-813597 -- exec busybox-7b57f96db7-z67wh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.68s)
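
The shell pipeline in this test is terse; what it does is extract the address that host.minikube.internal resolves to (busybox's nslookup prints it on line 5, field 3) and ping it once. Spelled out, with names copied from the log:

    HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-813597 -- \
      exec busybox-7b57f96db7-jvf9m -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 kubectl -p multinode-813597 -- \
      exec busybox-7b57f96db7-jvf9m -- sh -c "ping -c 1 $HOST_IP"   # 192.168.67.1 in this run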

                                                
                                    
TestMultiNode/serial/AddNode (54.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-813597 -v=5 --alsologtostderr
E1017 19:59:34.170772  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-813597 -v=5 --alsologtostderr: (53.414349846s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-813597 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 cp testdata/cp-test.txt multinode-813597:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 cp multinode-813597:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3286808315/001/cp-test_multinode-813597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 cp multinode-813597:/home/docker/cp-test.txt multinode-813597-m02:/home/docker/cp-test_multinode-813597_multinode-813597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m02 "sudo cat /home/docker/cp-test_multinode-813597_multinode-813597-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 cp multinode-813597:/home/docker/cp-test.txt multinode-813597-m03:/home/docker/cp-test_multinode-813597_multinode-813597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m03 "sudo cat /home/docker/cp-test_multinode-813597_multinode-813597-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 cp testdata/cp-test.txt multinode-813597-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 cp multinode-813597-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3286808315/001/cp-test_multinode-813597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 cp multinode-813597-m02:/home/docker/cp-test.txt multinode-813597:/home/docker/cp-test_multinode-813597-m02_multinode-813597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597 "sudo cat /home/docker/cp-test_multinode-813597-m02_multinode-813597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 cp multinode-813597-m02:/home/docker/cp-test.txt multinode-813597-m03:/home/docker/cp-test_multinode-813597-m02_multinode-813597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m03 "sudo cat /home/docker/cp-test_multinode-813597-m02_multinode-813597-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 cp testdata/cp-test.txt multinode-813597-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 cp multinode-813597-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3286808315/001/cp-test_multinode-813597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 cp multinode-813597-m03:/home/docker/cp-test.txt multinode-813597:/home/docker/cp-test_multinode-813597-m03_multinode-813597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597 "sudo cat /home/docker/cp-test_multinode-813597-m03_multinode-813597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 cp multinode-813597-m03:/home/docker/cp-test.txt multinode-813597-m02:/home/docker/cp-test_multinode-813597-m03_multinode-813597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m02 "sudo cat /home/docker/cp-test_multinode-813597-m03_multinode-813597-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.88s)
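
The cp matrix above is long but covers only three shapes of transfer; a condensed sketch with one example of each (the /tmp destination here is illustrative, not the tempdir the test actually used):

    out/minikube-linux-amd64 -p multinode-813597 cp testdata/cp-test.txt \
      multinode-813597:/home/docker/cp-test.txt                     # host -> node
    out/minikube-linux-amd64 -p multinode-813597 cp \
      multinode-813597:/home/docker/cp-test.txt /tmp/cp-test.txt    # node -> host
    out/minikube-linux-amd64 -p multinode-813597 cp \
      multinode-813597:/home/docker/cp-test.txt \
      multinode-813597-m02:/home/docker/cp-test.txt                 # node -> node
    out/minikube-linux-amd64 -p multinode-813597 ssh -n multinode-813597-m02 \
      "sudo cat /home/docker/cp-test.txt"                           # verify on the target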

                                                
                                    
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-813597 node stop m03: (1.274461329s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-813597 status: exit status 7 (515.829237ms)

                                                
                                                
-- stdout --
	multinode-813597
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-813597-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-813597-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-813597 status --alsologtostderr: exit status 7 (505.754992ms)

                                                
                                                
-- stdout --
	multinode-813597
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-813597-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-813597-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:00:11.814241  277823 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:00:11.814575  277823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:11.814588  277823 out.go:374] Setting ErrFile to fd 2...
	I1017 20:00:11.814595  277823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:11.815424  277823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:00:11.815994  277823 out.go:368] Setting JSON to false
	I1017 20:00:11.816026  277823 mustload.go:65] Loading cluster: multinode-813597
	I1017 20:00:11.816168  277823 notify.go:220] Checking for updates...
	I1017 20:00:11.816437  277823 config.go:182] Loaded profile config "multinode-813597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:11.816454  277823 status.go:174] checking status of multinode-813597 ...
	I1017 20:00:11.816852  277823 cli_runner.go:164] Run: docker container inspect multinode-813597 --format={{.State.Status}}
	I1017 20:00:11.836766  277823 status.go:371] multinode-813597 host status = "Running" (err=<nil>)
	I1017 20:00:11.836794  277823 host.go:66] Checking if "multinode-813597" exists ...
	I1017 20:00:11.837085  277823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-813597
	I1017 20:00:11.856525  277823 host.go:66] Checking if "multinode-813597" exists ...
	I1017 20:00:11.856873  277823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:00:11.856931  277823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813597
	I1017 20:00:11.875791  277823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33024 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/multinode-813597/id_rsa Username:docker}
	I1017 20:00:11.971565  277823 ssh_runner.go:195] Run: systemctl --version
	I1017 20:00:11.978411  277823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:00:11.991888  277823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:00:12.053149  277823 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-17 20:00:12.041910065 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:00:12.053998  277823 kubeconfig.go:125] found "multinode-813597" server: "https://192.168.67.2:8443"
	I1017 20:00:12.054037  277823 api_server.go:166] Checking apiserver status ...
	I1017 20:00:12.054089  277823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:12.066457  277823 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup
	W1017 20:00:12.075797  277823 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:00:12.075899  277823 ssh_runner.go:195] Run: ls
	I1017 20:00:12.080374  277823 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1017 20:00:12.084634  277823 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1017 20:00:12.084662  277823 status.go:463] multinode-813597 apiserver status = Running (err=<nil>)
	I1017 20:00:12.084672  277823 status.go:176] multinode-813597 status: &{Name:multinode-813597 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:00:12.084687  277823 status.go:174] checking status of multinode-813597-m02 ...
	I1017 20:00:12.085025  277823 cli_runner.go:164] Run: docker container inspect multinode-813597-m02 --format={{.State.Status}}
	I1017 20:00:12.103530  277823 status.go:371] multinode-813597-m02 host status = "Running" (err=<nil>)
	I1017 20:00:12.103557  277823 host.go:66] Checking if "multinode-813597-m02" exists ...
	I1017 20:00:12.103861  277823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-813597-m02
	I1017 20:00:12.122296  277823 host.go:66] Checking if "multinode-813597-m02" exists ...
	I1017 20:00:12.122556  277823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:00:12.122600  277823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-813597-m02
	I1017 20:00:12.141415  277823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33029 SSHKeyPath:/home/jenkins/minikube-integration/21664-135723/.minikube/machines/multinode-813597-m02/id_rsa Username:docker}
	I1017 20:00:12.236371  277823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:00:12.249879  277823 status.go:176] multinode-813597-m02 status: &{Name:multinode-813597-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:00:12.249932  277823 status.go:174] checking status of multinode-813597-m03 ...
	I1017 20:00:12.250267  277823 cli_runner.go:164] Run: docker container inspect multinode-813597-m03 --format={{.State.Status}}
	I1017 20:00:12.269241  277823 status.go:371] multinode-813597-m03 host status = "Stopped" (err=<nil>)
	I1017 20:00:12.269264  277823 status.go:384] host is not running, skipping remaining checks
	I1017 20:00:12.269271  277823 status.go:176] multinode-813597-m03 status: &{Name:multinode-813597-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-813597 node start m03 -v=5 --alsologtostderr: (6.680501658s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.40s)
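
The stop/start cycle for a single node, condensed from the two tests above (commands from the logs; status exits 7 whenever any node is down, which the harness treats as expected):

    out/minikube-linux-amd64 -p multinode-813597 node stop m03
    out/minikube-linux-amd64 -p multinode-813597 status              # exit status 7: m03 Stopped
    out/minikube-linux-amd64 -p multinode-813597 node start m03 -v=5 --alsologtostderr
    kubectl get nodes                                                # all nodes Ready again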

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-813597
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-813597
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-813597: (29.539217487s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-813597 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-813597 --wait=true -v=5 --alsologtostderr: (50.678086086s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-813597
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.33s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-813597 node delete m03: (4.680601474s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-813597 stop: (28.44566506s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-813597 status: exit status 7 (93.220193ms)

                                                
                                                
-- stdout --
	multinode-813597
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-813597-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-813597 status --alsologtostderr: exit status 7 (89.070555ms)

                                                
                                                
-- stdout --
	multinode-813597
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-813597-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:02:13.881403  287441 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:02:13.881531  287441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:02:13.881542  287441 out.go:374] Setting ErrFile to fd 2...
	I1017 20:02:13.881548  287441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:02:13.881780  287441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:02:13.881984  287441 out.go:368] Setting JSON to false
	I1017 20:02:13.882020  287441 mustload.go:65] Loading cluster: multinode-813597
	I1017 20:02:13.882190  287441 notify.go:220] Checking for updates...
	I1017 20:02:13.882463  287441 config.go:182] Loaded profile config "multinode-813597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:02:13.882482  287441 status.go:174] checking status of multinode-813597 ...
	I1017 20:02:13.882943  287441 cli_runner.go:164] Run: docker container inspect multinode-813597 --format={{.State.Status}}
	I1017 20:02:13.902690  287441 status.go:371] multinode-813597 host status = "Stopped" (err=<nil>)
	I1017 20:02:13.902719  287441 status.go:384] host is not running, skipping remaining checks
	I1017 20:02:13.902726  287441 status.go:176] multinode-813597 status: &{Name:multinode-813597 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:02:13.902775  287441 status.go:174] checking status of multinode-813597-m02 ...
	I1017 20:02:13.903022  287441 cli_runner.go:164] Run: docker container inspect multinode-813597-m02 --format={{.State.Status}}
	I1017 20:02:13.921088  287441 status.go:371] multinode-813597-m02 host status = "Stopped" (err=<nil>)
	I1017 20:02:13.921117  287441 status.go:384] host is not running, skipping remaining checks
	I1017 20:02:13.921124  287441 status.go:176] multinode-813597-m02 status: &{Name:multinode-813597-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.63s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (44.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-813597 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-813597 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (44.190664939s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-813597 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.81s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-813597
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-813597-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-813597-m02 --driver=docker  --container-runtime=crio: exit status 14 (70.029831ms)

                                                
                                                
-- stdout --
	* [multinode-813597-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-813597-m02' is duplicated with machine name 'multinode-813597-m02' in profile 'multinode-813597'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-813597-m03 --driver=docker  --container-runtime=crio
E1017 20:03:05.046576  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-813597-m03 --driver=docker  --container-runtime=crio: (22.492941024s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-813597
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-813597: exit status 80 (292.208405ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-813597 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-813597-m03 already exists in multinode-813597-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-813597-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-813597-m03: (2.400746387s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.31s)

                                                
                                    
TestPreload (109.87s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-967307 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-967307 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (49.261920127s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-967307 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-967307 image pull gcr.io/k8s-minikube/busybox: (2.331460857s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-967307
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-967307: (5.912761901s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-967307 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1017 20:04:34.171341  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-967307 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.665842811s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-967307 image list
helpers_test.go:175: Cleaning up "test-preload-967307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-967307
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-967307: (2.477801358s)
--- PASS: TestPreload (109.87s)
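
The preload round-trip being exercised, condensed from the log: start with preload disabled, pull an image the preload tarball would not contain, stop, restart (preload on by default), and confirm the pulled image survived:

    out/minikube-linux-amd64 start -p test-preload-967307 --memory=3072 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    out/minikube-linux-amd64 -p test-preload-967307 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-967307
    out/minikube-linux-amd64 start -p test-preload-967307 --memory=3072 \
      --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-967307 image list   # busybox should still be listed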

                                                
                                    
TestScheduledStopUnix (97.14s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-910370 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-910370 --memory=3072 --driver=docker  --container-runtime=crio: (21.611058694s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-910370 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-910370 -n scheduled-stop-910370
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-910370 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1017 20:05:40.229714  139217 retry.go:31] will retry after 98.292µs: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.230929  139217 retry.go:31] will retry after 128.736µs: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.232032  139217 retry.go:31] will retry after 175.456µs: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.233129  139217 retry.go:31] will retry after 306.129µs: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.234255  139217 retry.go:31] will retry after 646.095µs: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.235383  139217 retry.go:31] will retry after 536.426µs: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.236540  139217 retry.go:31] will retry after 800.853µs: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.237706  139217 retry.go:31] will retry after 2.328973ms: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.240942  139217 retry.go:31] will retry after 2.904165ms: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.244190  139217 retry.go:31] will retry after 5.454283ms: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.250424  139217 retry.go:31] will retry after 4.272775ms: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.255675  139217 retry.go:31] will retry after 12.048252ms: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.267872  139217 retry.go:31] will retry after 7.427562ms: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.276134  139217 retry.go:31] will retry after 23.159961ms: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
I1017 20:05:40.300456  139217 retry.go:31] will retry after 25.840272ms: open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/scheduled-stop-910370/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-910370 --cancel-scheduled
E1017 20:05:57.242614  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-910370 -n scheduled-stop-910370
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-910370
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-910370 --schedule 15s
E1017 20:06:08.117411  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-910370
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-910370: exit status 7 (70.845881ms)
-- stdout --
	scheduled-stop-910370
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-910370 -n scheduled-stop-910370
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-910370 -n scheduled-stop-910370: exit status 7 (68.313ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-910370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-910370
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-910370: (4.131466585s)
--- PASS: TestScheduledStopUnix (97.14s)

TestInsufficientStorage (10.22s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-621455 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-621455 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.70790663s)
-- stdout --
	{"specversion":"1.0","id":"49527536-73f7-4ce1-9e01-19d3f755454a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-621455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"752ebf12-168b-42ae-851c-79982952b1c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21664"}}
	{"specversion":"1.0","id":"3eccea26-1503-45d1-9030-59c666c48146","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bf7cc215-6389-4e0f-9de1-3c540fe89f04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig"}}
	{"specversion":"1.0","id":"7ee00aae-acbe-4cc4-8c68-e3509c64da9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube"}}
	{"specversion":"1.0","id":"25ffe235-f891-431a-b357-b4027609fafb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2c942e2b-c7e2-4f29-9cfe-982f71945449","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1f7d271c-16a0-4bec-b6a3-e685651eec6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1b3d7af6-fa68-478a-ae06-7fab49f5c65e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"673a11f1-e8db-4018-b4c3-c2373df97c74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2dd7a922-acde-41cf-942e-d7860007c405","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"49c840ef-6234-4ff8-bbcc-1fc73a9bf1da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-621455\" primary control-plane node in \"insufficient-storage-621455\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"894b4bd0-ae6c-46fe-a61c-f25f86422e5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1219e98c-367d-4e5e-b98b-ed7e0d4d1bd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab57dc93-d308-4611-9b6a-2e14fcb24899","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-621455 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-621455 --output=json --layout=cluster: exit status 7 (287.119235ms)
-- stdout --
	{"Name":"insufficient-storage-621455","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-621455","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1017 20:07:03.316418  307663 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-621455" does not appear in /home/jenkins/minikube-integration/21664-135723/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-621455 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-621455 --output=json --layout=cluster: exit status 7 (286.250367ms)
-- stdout --
	{"Name":"insufficient-storage-621455","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-621455","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1017 20:07:03.603730  307775 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-621455" does not appear in /home/jenkins/minikube-integration/21664-135723/kubeconfig
	E1017 20:07:03.614104  307775 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/insufficient-storage-621455/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-621455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-621455
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-621455: (1.936000978s)
--- PASS: TestInsufficientStorage (10.22s)

TestRunningBinaryUpgrade (58.14s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.28712376 start -p running-upgrade-097245 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.28712376 start -p running-upgrade-097245 --memory=3072 --vm-driver=docker  --container-runtime=crio: (25.602920576s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-097245 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-097245 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.258949454s)
helpers_test.go:175: Cleaning up "running-upgrade-097245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-097245
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-097245: (4.619009694s)
--- PASS: TestRunningBinaryUpgrade (58.14s)

TestKubernetesUpgrade (400.29s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.169713826s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-660693
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-660693: (2.366624725s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-660693 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-660693 status --format={{.Host}}: exit status 7 (77.411327ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.737291578s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-660693 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (75.274355ms)
-- stdout --
	* [kubernetes-upgrade-660693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-660693
	    minikube start -p kubernetes-upgrade-660693 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6606932 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-660693 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-660693 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m36.025208644s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-660693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-660693
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-660693: (3.777837358s)
--- PASS: TestKubernetesUpgrade (400.29s)

TestMissingContainerUpgrade (76.94s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2453259229 start -p missing-upgrade-159057 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2453259229 start -p missing-upgrade-159057 --memory=3072 --driver=docker  --container-runtime=crio: (25.758728111s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-159057
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-159057: (11.948241189s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-159057
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-159057 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-159057 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.06236266s)
helpers_test.go:175: Cleaning up "missing-upgrade-159057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-159057
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-159057: (2.51393784s)
--- PASS: TestMissingContainerUpgrade (76.94s)

TestStoppedBinaryUpgrade/Setup (0.52s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-275969 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-275969 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (87.572516ms)
-- stdout --
	* [NoKubernetes-275969] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestPause/serial/Start (57s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-538803 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-538803 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (57.002172245s)
--- PASS: TestPause/serial/Start (57.00s)

TestNoKubernetes/serial/StartWithK8s (38.31s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-275969 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-275969 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.909066729s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-275969 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.31s)

TestStoppedBinaryUpgrade/Upgrade (57.71s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.423209941 start -p stopped-upgrade-289368 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.423209941 start -p stopped-upgrade-289368 --memory=3072 --vm-driver=docker  --container-runtime=crio: (40.753333957s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.423209941 -p stopped-upgrade-289368 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.423209941 -p stopped-upgrade-289368 stop: (1.917421052s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-289368 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-289368 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.038610145s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (57.71s)

TestNoKubernetes/serial/StartWithStopK8s (17.56s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-275969 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-275969 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (15.196667757s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-275969 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-275969 status -o json: exit status 2 (321.805354ms)
-- stdout --
	{"Name":"NoKubernetes-275969","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-275969
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-275969: (2.03792165s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.56s)

TestNoKubernetes/serial/Start (5.47s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-275969 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-275969 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.465086282s)
--- PASS: TestNoKubernetes/serial/Start (5.47s)

TestPause/serial/SecondStartNoReconfiguration (6.66s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-538803 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-538803 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.645602997s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.66s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-289368
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-289368: (1.107332694s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-275969 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-275969 "sudo systemctl is-active --quiet service kubelet": exit status 1 (333.115964ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

TestNoKubernetes/serial/ProfileList (3.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (2.423963995s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.14s)

TestNoKubernetes/serial/Stop (2.71s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-275969
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-275969: (2.707122653s)
--- PASS: TestNoKubernetes/serial/Stop (2.71s)

TestNoKubernetes/serial/StartNoArgs (6.84s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-275969 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-275969 --driver=docker  --container-runtime=crio: (6.8417603s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.84s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-275969 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-275969 "sudo systemctl is-active --quiet service kubelet": exit status 1 (327.930305ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestNetworkPlugins/group/false (4.1s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-684669 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-684669 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (222.057847ms)
-- stdout --
	* [false-684669] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1017 20:08:23.042026  334262 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:08:23.042262  334262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:08:23.042289  334262 out.go:374] Setting ErrFile to fd 2...
	I1017 20:08:23.042311  334262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:08:23.042662  334262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-135723/.minikube/bin
	I1017 20:08:23.043359  334262 out.go:368] Setting JSON to false
	I1017 20:08:23.044901  334262 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6651,"bootTime":1760725052,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:08:23.045062  334262 start.go:141] virtualization: kvm guest
	I1017 20:08:23.048294  334262 out.go:179] * [false-684669] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:08:23.049958  334262 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:08:23.050029  334262 notify.go:220] Checking for updates...
	I1017 20:08:23.053488  334262 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:08:23.059097  334262 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-135723/kubeconfig
	I1017 20:08:23.060833  334262 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-135723/.minikube
	I1017 20:08:23.062541  334262 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:08:23.064134  334262 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:08:23.066397  334262 config.go:182] Loaded profile config "force-systemd-env-834947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:08:23.066550  334262 config.go:182] Loaded profile config "missing-upgrade-159057": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1017 20:08:23.066750  334262 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:08:23.104538  334262 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1017 20:08:23.104710  334262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:08:23.184116  334262 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-17 20:08:23.171087258 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1017 20:08:23.184225  334262 docker.go:318] overlay module found
	I1017 20:08:23.188914  334262 out.go:179] * Using the docker driver based on user configuration
	I1017 20:08:23.190409  334262 start.go:305] selected driver: docker
	I1017 20:08:23.190437  334262 start.go:925] validating driver "docker" against <nil>
	I1017 20:08:23.190455  334262 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:08:23.193218  334262 out.go:203] 
	W1017 20:08:23.195451  334262 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1017 20:08:23.196968  334262 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-684669 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-684669
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-684669
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-684669
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-684669
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-684669
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-684669
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-684669
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-684669
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-684669
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-684669
>>> host: /etc/nsswitch.conf:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: /etc/hosts:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: /etc/resolv.conf:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-684669
>>> host: crictl pods:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: crictl containers:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> k8s: describe netcat deployment:
error: context "false-684669" does not exist
>>> k8s: describe netcat pod(s):
error: context "false-684669" does not exist
>>> k8s: netcat logs:
error: context "false-684669" does not exist
>>> k8s: describe coredns deployment:
error: context "false-684669" does not exist
>>> k8s: describe coredns pods:
error: context "false-684669" does not exist
>>> k8s: coredns logs:
error: context "false-684669" does not exist
>>> k8s: describe api server pod(s):
error: context "false-684669" does not exist
>>> k8s: api server logs:
error: context "false-684669" does not exist
>>> host: /etc/cni:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: ip a s:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: ip r s:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: iptables-save:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: iptables table nat:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> k8s: describe kube-proxy daemon set:
error: context "false-684669" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "false-684669" does not exist
>>> k8s: kube-proxy logs:
error: context "false-684669" does not exist
>>> host: kubelet daemon status:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: kubelet daemon config:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> k8s: kubelet logs:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-684669
>>> host: docker daemon status:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: docker daemon config:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: /etc/docker/daemon.json:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: docker system info:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: cri-docker daemon status:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: cri-docker daemon config:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: cri-dockerd version:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: containerd daemon status:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: containerd daemon config:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: /etc/containerd/config.toml:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"
>>> host: containerd config dump:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-684669"

                                                
                                                
----------------------- debugLogs end: false-684669 [took: 3.69914805s] --------------------------------
helpers_test.go:175: Cleaning up "false-684669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-684669
--- PASS: TestNetworkPlugins/group/false (4.10s)
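The debugLogs probes above all report the same failure because the "false-684669" profile had already been removed before the log collector ran its per-daemon checks. A minimal recovery sketch, built only from the commands the error message itself suggests:

	# List known profiles; a profile missing here has no saved config,
	# so every "minikube -p <name> ..." invocation against it will fail.
	minikube profile list

	# Recreate the profile only if it is genuinely needed (runtime flags
	# mirror the ones used throughout this run).
	minikube start -p false-684669 --driver=docker --container-runtime=crio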

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (50.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1017 20:09:34.171004  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/functional-558322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.578875678s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.58s)
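The FirstStart invocation can be replayed outside the harness. A sketch, assuming a locally installed minikube binary in place of the tree-built out/minikube-linux-amd64; the flags are copied from the test command above:

	# 3 GiB of memory, the docker driver, the cri-o runtime, and the
	# oldest Kubernetes version in this test matrix.
	minikube start -p old-k8s-version-726816 \
	  --memory=3072 --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.28.0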

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (51.21s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.206517243s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-726816 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d45668ea-4755-40f0-8901-dc50444939c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d45668ea-4755-40f0-8901-dc50444939c7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003996114s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-726816 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.35s)
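DeployApp applies testdata/busybox.yaml and waits up to 8m0s for the integration-test=busybox label to report healthy. The manifest itself is not reproduced in this log; the sketch below is a hypothetical stand-in that carries the same pod name and label, using the busybox image this run's image lists report:

	kubectl --context old-k8s-version-726816 apply -f - <<'EOF'
	# Hypothetical equivalent of testdata/busybox.yaml.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]
	EOF
	kubectl --context old-k8s-version-726816 wait pod \
	  -l integration-test=busybox --for=condition=Ready --timeout=8m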

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-726816 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-726816 --alsologtostderr -v=3: (16.025810729s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-449580 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f84fa1c2-b435-4d7e-8356-a847e5291ee8] Pending
helpers_test.go:352: "busybox" [f84fa1c2-b435-4d7e-8356-a847e5291ee8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f84fa1c2-b435-4d7e-8356-a847e5291ee8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003712342s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-449580 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-726816 -n old-k8s-version-726816
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-726816 -n old-k8s-version-726816: exit status 7 (75.447508ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-726816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
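minikube status encodes component state in its exit code (hence the "may be ok" note above): bit 1 for the host, bit 2 for the cluster, bit 4 for Kubernetes, so 7 is the expected result for a fully stopped profile. A shell sketch of the same tolerance:

	# Exit code 7 (host + cluster + kubernetes all down) is expected for
	# a stopped profile; any other non-zero code is a real failure.
	out=$(minikube status --format='{{.Host}}' -p old-k8s-version-726816); rc=$?
	if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
	  echo "unexpected status exit code $rc: $out" >&2
	  exit 1
	fi
	minikube addons enable dashboard -p old-k8s-version-726816 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4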

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (44.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-726816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (43.9985829s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-726816 -n old-k8s-version-726816
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.19s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-449580 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-449580 --alsologtostderr -v=3: (16.18939961s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449580 -n no-preload-449580
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449580 -n no-preload-449580: exit status 7 (70.853512ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-449580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (49.19s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-449580 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.850990453s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449580 -n no-preload-449580
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dkhv5" [8d572a9b-dd03-4904-83d4-3dfb0680522e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003553396s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dkhv5" [8d572a9b-dd03-4904-83d4-3dfb0680522e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003769853s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-726816 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
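UserAppExistsAfterStop and AddonExistsAfterStop both poll the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace, then describe the metrics scraper as a final diagnostic. An equivalent sketch using kubectl wait in place of the harness's own poller:

	kubectl --context old-k8s-version-726816 -n kubernetes-dashboard \
	  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
	kubectl --context old-k8s-version-726816 -n kubernetes-dashboard \
	  describe deploy/dashboard-metrics-scraper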

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-726816 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
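VerifyKubernetesImages dumps the node's image store as JSON and reports anything outside the expected system set, which is how the kindnetd and busybox images above get flagged. A rough shell equivalent (the jq filter and the repoTags field name are assumptions based on CRI-style image listings; the test's real allow-list lives in the Go harness):

	# Print repo tags that do not come from the registries minikube
	# itself provisions from.
	minikube -p old-k8s-version-726816 image list --format=json |
	  jq -r '.[].repoTags[]?' |
	  grep -vE '^(registry\.k8s\.io|gcr\.io/k8s-minikube)/' || true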

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (42.11s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.107164677s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dkzr6" [92cf2d50-aa83-4686-8f20-055646b5e2b8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003313459s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dkzr6" [92cf2d50-aa83-4686-8f20-055646b5e2b8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004841488s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-449580 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-449580 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.411815053s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (27.95s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (27.952962767s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.95s)
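The newest-cni group starts with --network-plugin=cni, hands a custom pod CIDR to kubeadm via --extra-config, and trims the --wait set to the apiserver, system pods, and the default service account, since no CNI is installed yet and workload pods cannot schedule. Replayable outside the harness as (flags copied from the test command above):

	minikube start -p newest-cni-051083 --memory=3072 \
	  --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.34.1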

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-051488 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b23380a0-e664-4975-ad28-1996f0687b6c] Pending
helpers_test.go:352: "busybox" [b23380a0-e664-4975-ad28-1996f0687b6c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b23380a0-e664-4975-ad28-1996f0687b6c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004365137s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-051488 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-051488 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-051488 --alsologtostderr -v=3: (18.118911707s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.51s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-051083 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-051083 --alsologtostderr -v=3: (2.512649305s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.51s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051083 -n newest-cni-051083
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051083 -n newest-cni-051083: exit status 7 (67.496547ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-051083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (10.86s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-051083 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.539780785s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051083 -n newest-cni-051083
E1017 20:13:05.046994  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/addons-808548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-563805 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4b56d57d-2571-48ae-86e3-1ba948f2a6fa] Pending
helpers_test.go:352: "busybox" [4b56d57d-2571-48ae-86e3-1ba948f2a6fa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4b56d57d-2571-48ae-86e3-1ba948f2a6fa] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003956705s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-563805 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-051488 -n embed-certs-051488
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-051488 -n embed-certs-051488: exit status 7 (81.987729ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-051488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (44.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-051488 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.968442471s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-051488 -n embed-certs-051488
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-051083 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-563805 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-563805 --alsologtostderr -v=3: (18.803533847s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805: exit status 7 (74.302216ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-563805 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (43.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-563805 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.525717202s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-563805 -n default-k8s-diff-port-563805
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (43.86s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (43.92s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.923443623s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xkxdm" [52db9f43-c27f-4ced-bad4-085de15d48d2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003893707s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xkxdm" [52db9f43-c27f-4ced-bad4-085de15d48d2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003665001s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-051488 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-051488 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (38.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (38.12790751s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (38.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cfv55" [8cb77f18-44bb-401c-b230-621ccb6ff4a4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003609348s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cfv55" [8cb77f18-44bb-401c-b230-621ccb6ff4a4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004109298s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-563805 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-563805 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-684669 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-684669 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2z9sg" [45a37ac3-a02f-4a05-a886-6addcbc920d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2z9sg" [45a37ac3-a02f-4a05-a886-6addcbc920d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003956928s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)
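Each NetCatPod step force-replaces testdata/netcat-deployment.yaml and waits for the app=netcat label; the manifest is not shown in this log, but the later DNS/Localhost/HairPin probes imply a deployment named netcat serving on port 8080. The deploy-and-wait half, sketched with kubectl wait in place of the harness's poller:

	kubectl --context auto-684669 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-684669 wait pod -l app=netcat \
	  --for=condition=Ready --timeout=15m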

                                                
                                    
TestNetworkPlugins/group/calico/Start (52.68s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (52.680891848s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.68s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-684669 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
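DNS, Localhost, and HairPin exercise three network paths from inside the netcat pod: cluster DNS resolution, a loopback dial, and a dial back through the pod's own service (hairpin NAT). The probes, copied from the commands logged above and runnable as-is:

	# DNS: resolve a cluster service name through cluster DNS.
	kubectl --context auto-684669 exec deployment/netcat -- \
	  nslookup kubernetes.default
	# Localhost: dial the pod's own port over loopback.
	kubectl --context auto-684669 exec deployment/netcat -- \
	  /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: dial the pod back through its service name.
	kubectl --context auto-684669 exec deployment/netcat -- \
	  /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"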

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-v2zb4" [85f3f745-9cee-4622-bff3-5cfafbad8fe5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004448251s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
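ControllerPod only runs for CNIs that ship a controller or node agent: kindnet waits on app=kindnet in kube-system, while calico's variant below waits on k8s-app=calico-node. The kindnet check, sketched with kubectl wait:

	kubectl --context kindnet-684669 -n kube-system \
	  wait pod -l app=kindnet --for=condition=Ready --timeout=10m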

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-684669 "pgrep -a kubelet"
I1017 20:14:54.050229  139217 config.go:182] Loaded profile config "kindnet-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-684669 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jpwcm" [febf91d5-924f-4601-9b18-cbf8a7bd9ecf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jpwcm" [febf91d5-924f-4601-9b18-cbf8a7bd9ecf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.005769958s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (56.96s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (56.960073492s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.96s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-684669 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (65.73s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1017 20:15:21.743504  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/old-k8s-version-726816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:15:21.750257  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/old-k8s-version-726816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:15:21.761689  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/old-k8s-version-726816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:15:21.783552  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/old-k8s-version-726816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:15:21.825102  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/old-k8s-version-726816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m5.727501091s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.73s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (48.35s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1017 20:15:26.875510  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/old-k8s-version-726816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (48.347809599s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.35s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-c5995" [e29c8e33-6043-4473-9866-804a9c8f1a6e] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1017 20:15:31.997891  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/old-k8s-version-726816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-c5995" [e29c8e33-6043-4473-9866-804a9c8f1a6e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003793986s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-684669 "pgrep -a kubelet"
I1017 20:15:33.727041  139217 config.go:182] Loaded profile config "calico-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-684669 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-67szk" [636fe2ad-827d-4818-95d8-b18940ecfed6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-67szk" [636fe2ad-827d-4818-95d8-b18940ecfed6] Running
E1017 20:15:42.239347  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/old-k8s-version-726816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:15:42.680031  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:15:42.686511  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:15:42.697967  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:15:42.719470  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:15:42.760931  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:15:42.842404  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:15:43.004034  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004119993s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.29s)
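
Note: the log only reveals fragments of testdata/netcat-deployment.yaml: a Deployment named "netcat" carrying the app=netcat label with a "dnsutils" container, and (from the HairPin subtest below) a Service named "netcat" reachable on port 8080. A hedged reconstruction follows; the image and command are assumptions, not taken from this report:

kubectl --context calico-684669 replace --force -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  selector:
    matchLabels:
      app: netcat
  template:
    metadata:
      labels:
        app: netcat
    spec:
      containers:
      - name: dnsutils
        image: registry.k8s.io/e2e-test-images/agnhost:2.40  # assumed image
        command: ["/agnhost", "netexec", "--http-port=8080"] # assumed; serves :8080
---
apiVersion: v1
kind: Service
metadata:
  name: netcat
spec:
  selector:
    app: netcat
  ports:
  - port: 8080
    targetPort: 8080
EOF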

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-684669 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1017 20:15:43.326121  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)
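
Note: Localhost and HairPin run the same zero-I/O probe against two different addresses. "nc -w 5 -i 5 -z localhost 8080" only proves the pod can reach its own port directly, while "nc -w 5 -i 5 -z netcat 8080" resolves the pod's own Service and connects through the service VIP, which works only if the CNI/kube-proxy pair supports hairpin NAT (traffic that leaves a pod for a VIP and load-balances straight back to the same pod). Flags: -z connect without sending data, -w 5 overall timeout, -i 5 delay between probes. The same hairpin probe with the service name fully qualified (sketch):

kubectl --context calico-684669 exec deployment/netcat -- \
  /bin/sh -c "nc -w 5 -i 5 -z netcat.default.svc.cluster.local 8080"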

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-684669 "pgrep -a kubelet"
I1017 20:15:53.766314  139217 config.go:182] Loaded profile config "custom-flannel-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-684669 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wkwcg" [5490717d-dc22-4cf1-b71d-5e8d2ceabd1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wkwcg" [5490717d-dc22-4cf1-b71d-5e8d2ceabd1d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00398911s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-684669 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1017 20:16:03.175092  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (36.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-684669 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (36.686333785s)
--- PASS: TestNetworkPlugins/group/bridge/Start (36.69s)
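
Note: to reproduce this bridge-CNI start outside the harness, the same invocation works with a stock minikube binary (sketch, flags copied from the run above):

minikube start -p bridge-684669 --memory=3072 --alsologtostderr --wait=true \
  --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=crio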

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-684669 "pgrep -a kubelet"
I1017 20:16:09.508627  139217 config.go:182] Loaded profile config "enable-default-cni-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.54s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-684669 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5mb75" [b9ef4395-01b5-4cdd-acb2-a841bda4d514] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5mb75" [b9ef4395-01b5-4cdd-acb2-a841bda4d514] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003458516s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-f8k69" [f767c73f-2064-4f7f-bbf4-32f5d5491022] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003551469s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
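
Note: ControllerPod is a poll for a Ready flannel pod under the app=flannel label in the kube-flannel namespace. A one-line manual equivalent (sketch):

kubectl --context flannel-684669 -n kube-flannel wait pod -l app=flannel \
  --for=condition=Ready --timeout=10m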

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-684669 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-684669 "pgrep -a kubelet"
I1017 20:16:19.449384  139217 config.go:182] Loaded profile config "flannel-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-684669 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bxvsm" [04152338-5932-4ffd-8113-f75c3a98d2bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bxvsm" [04152338-5932-4ffd-8113-f75c3a98d2bd] Running
E1017 20:16:23.656949  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/no-preload-449580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.005046484s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.22s)
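
Note: the 15m0s pod wait above maps roughly onto a deployment-availability check; a manual near-equivalent (sketch):

kubectl --context flannel-684669 wait deployment/netcat \
  --for=condition=Available --timeout=15m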

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-684669 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-684669 "pgrep -a kubelet"
I1017 20:16:41.122976  139217 config.go:182] Loaded profile config "bridge-684669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-684669 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jbphm" [8a31c261-f207-4f31-9995-ffce3a02442e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1017 20:16:43.683553  139217 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-135723/.minikube/profiles/old-k8s-version-726816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-jbphm" [8a31c261-f207-4f31-9995-ffce3a02442e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003951891s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-684669 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)
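
Note: the DNS subtest exercises CoreDNS across the freshly configured CNI by resolving the short name kubernetes.default from inside the pod, relying on search-path expansion. Resolving the fully qualified name is an equivalent manual check (sketch):

kubectl --context bridge-684669 exec deployment/netcat -- \
  nslookup kubernetes.default.svc.cluster.local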

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-684669 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                    

Test skip (26/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-270495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-270495
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-684669 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-684669

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-684669

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-684669

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-684669

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-684669

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-684669

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-684669

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-684669

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-684669

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-684669

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: /etc/hosts:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: /etc/resolv.conf:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-684669

>>> host: crictl pods:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: crictl containers:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> k8s: describe netcat deployment:
error: context "kubenet-684669" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-684669" does not exist

>>> k8s: netcat logs:
error: context "kubenet-684669" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-684669" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-684669" does not exist

>>> k8s: coredns logs:
error: context "kubenet-684669" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-684669" does not exist

>>> k8s: api server logs:
error: context "kubenet-684669" does not exist

>>> host: /etc/cni:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: ip a s:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: ip r s:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: iptables-save:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: iptables table nat:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-684669" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-684669" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-684669" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: kubelet daemon config:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> k8s: kubelet logs:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-684669

>>> host: docker daemon status:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: docker daemon config:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: docker system info:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: cri-docker daemon status:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: cri-docker daemon config:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: cri-dockerd version:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: containerd daemon status:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: containerd daemon config:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: containerd config dump:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: crio daemon status:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: crio daemon config:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: /etc/crio:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

>>> host: crio config:
* Profile "kubenet-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-684669"

----------------------- debugLogs end: kubenet-684669 [took: 3.388559184s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-684669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-684669
I1017 20:08:22.924193  139217 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3956803100/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1017 20:08:22.942919  139217 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3956803100/001/docker-machine-driver-kvm2 version is 1.37.0
--- SKIP: TestNetworkPlugins/group/kubenet (3.58s)
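
Note: every probe in the debugLogs block above failed with "context was not found" or "Profile ... not found" because the skip fired before "minikube start" ever ran for kubenet-684669, so no cluster, profile, or kubeconfig entry exists; the empty kubectl config dump (clusters: null, contexts: null, users: null) says the same thing. A manual confirmation (sketch):

kubectl config get-contexts kubenet-684669   # fails: context not found (assumed output)
minikube profile list                        # kubenet-684669 would be absent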

                                                
                                    
TestNetworkPlugins/group/cilium (5.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-684669 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-684669

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-684669

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-684669

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-684669

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-684669

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-684669

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-684669

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-684669

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-684669

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-684669

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-684669

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-684669" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-684669" does not exist

>>> k8s: netcat logs:
error: context "cilium-684669" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-684669" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-684669" does not exist

>>> k8s: coredns logs:
error: context "cilium-684669" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-684669" does not exist

>>> k8s: api server logs:
error: context "cilium-684669" does not exist

>>> host: /etc/cni:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: ip a s:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: ip r s:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: iptables-save:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: iptables table nat:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-684669

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-684669

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-684669" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-684669" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-684669

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-684669

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-684669" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-684669" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-684669" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-684669" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-684669" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: kubelet daemon config:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> k8s: kubelet logs:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-684669

>>> host: docker daemon status:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: docker daemon config:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: docker system info:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: cri-docker daemon status:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: cri-docker daemon config:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: cri-dockerd version:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: containerd daemon status:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: containerd daemon config:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: containerd config dump:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: crio daemon status:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: crio daemon config:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: /etc/crio:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

>>> host: crio config:
* Profile "cilium-684669" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-684669"

----------------------- debugLogs end: cilium-684669 [took: 5.54020847s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-684669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-684669
--- SKIP: TestNetworkPlugins/group/cilium (5.71s)